August 14, 2008
Tell me if you've heard this one before. IBM is planning to deliver one of the fastest supercomputers in the world, to help unravel the mysteries of the universe. Deja vu?
This latest IBM colossus will be a 360 teraflop machine that is headed for the University of Toronto and is to be shared by the SciNet Consortium, a group that includes the university and a number of research hospitals. Applications destined to run on the new super include the usual HPC suspects: aerospace, astrophysics, bioinformatics, chemical physics, and climate change prediction. The new machine will also be used by the CERN-run ATLAS project, which is investigating the forces that govern the universe.
From the IBM announcement, here are the main bragging points:
"[T]he machine is expected to be among the top 20 fastest supercomputers in the world; 30 times faster than the peak performance of Canada's current largest research system. It also represents the second largest system ever built on a university campus, and the largest supercomputer outside the United States."
The new system is slated to be fully operational next summer, although a partial implementation could be running as early as January. The acquisition is part of a five-year deal that is expected to cost $47 million.
One of the unconventional aspects of the system's design is that it will incorporate both Power6- and Intel Nehalem-based clusters using IBM's new iDataPlex platform. According to company sources, 300 teraflops will come from Nehalem-based nodes and 60 teraflops from Power6-based nodes. This is yet another example of IBM's increasing comfort with hybrid computing platforms. IBM's more famous Roadrunner system combined AMD Opterons with Cell processors to deliver the world's first petaflop computer. The idea behind hybrid computers is to give users the ability to run code on the most appropriate hardware in order to speed execution time.
Of course, the difficult part of hybrid computing is the software. Initially, individual applications will execute on one architecture or the other. Cluster Resources' Moab cluster scheduler will be used to map jobs onto the appropriate architecture, and will also include specific enhancements for a stateless, diskless, multi-architecture cluster. The entire cluster is hooked to a shared storage system and uses GPFS as the backend file system. Both processor architectures will share these resources.
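To make that concrete, here is a rough sketch of what architecture-aware job submission could look like on a machine like this. It is purely illustrative: the feature tags ("nehalem", "power6"), the script name, and the TORQUE-style resource syntax passed to msub are my own assumptions, not details of the SciNet configuration.

# Hypothetical sketch: building an architecture-pinned Moab submit command.
# Feature names ("nehalem", "power6") and resource flags are assumptions
# for illustration, not the actual SciNet setup.

def build_submit_command(script, arch, nodes, ppn, walltime):
    """Return an msub command that pins a job to one processor
    architecture via a node feature tag (TORQUE-style syntax)."""
    if arch not in ("nehalem", "power6"):
        raise ValueError("unknown architecture: %s" % arch)
    resources = "nodes=%d:ppn=%d:%s,walltime=%s" % (nodes, ppn, arch, walltime)
    return ["msub", "-l", resources, script]

if __name__ == "__main__":
    # Dry run: print the command rather than invoking the scheduler.
    cmd = build_submit_command("atlas_sim.sh", "power6", nodes=8, ppn=16,
                               walltime="12:00:00")
    print(" ".join(cmd))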
The plan is that, over time, as software is optimized for the heterogeneous environment, it will become possible for applications to execute certain stages on the Nehalem portion of the machine and other stages on the Power6 portion. IBM says the entire cluster is modular and will support changing the operating system and architecture mix dynamically.
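Here is a hedged sketch of how such staged execution might be driven: two jobs, one per architecture, each pinned by a feature tag. The stage names, scripts, and core counts are invented for illustration; in practice the second job would also carry a scheduler dependency on the first, with syntax that varies by Moab/TORQUE configuration.

# Hypothetical sketch of a two-stage pipeline split across architectures.
# Stage names, scripts, and feature tags are invented for illustration;
# a real pipeline would chain the jobs with a scheduler dependency.

STAGES = [
    # (script, architecture feature, nodes, cores per node)
    ("preprocess.sh", "nehalem", 4, 8),
    ("solver.sh", "power6", 8, 16),
]

def submit_commands(stages, walltime="06:00:00"):
    """Build one msub command per stage, pinning each to its architecture."""
    cmds = []
    for script, arch, nodes, ppn in stages:
        resources = "nodes=%d:ppn=%d:%s,walltime=%s" % (nodes, ppn, arch, walltime)
        cmds.append(["msub", "-l", resources, script])
    return cmds

if __name__ == "__main__":
    # Dry run: print the commands instead of submitting them.
    for cmd in submit_commands(STAGES):
        print(" ".join(cmd))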
The announcement of the Canadian super comes on the heels of IBM's deal with the UK Met Office to deliver a 125 teraflop supercomputer for weather forecasting. That procurement represented a five-year contract that would have the Met Office system topping out at 1 petaflop by 2011. Unlike the Canadian machine, the UK system is pure Power6, as are the two 145 teraflop systems ordered by the European Centre for Medium-Range Weather Forecasts (ECMWF), the 125 teraflop system acquired by the Max Planck Society (MPG), a 76 teraflop system installed at the National Center for Atmospheric Research (NCAR), and the 60 teraflop machine running at SARA in Amsterdam. That's over 600 teraflops of Power6 deployed -- or scheduled to be deployed -- since the beginning of 2008. If you ignore the Roadrunner machine as an anomaly, Power6-based supercomputer acquisitions are outpacing the much more storied Cell-based supers.
In a way, that's a little surprising. The sporty little 4.7 GHz Power6 chips tend to run hot and are among the least efficient processors on the increasingly important performance-per-watt metric. It's no coincidence that the Power6-based clusters are all water cooled. So why is this architecture so successful? Maybe it's because the IBM sales team could sell central heating in Singapore. Or maybe it's because there's a lot of Power-based software out there expecting great single-threaded performance and ready to hop on to the next processor generation. Whatever the reason, it will be interesting to see what the Power7 chips bring to the table. Hmm... I'll bet they'll be used to build one of the fastest computers in the world, to help unravel the mysteries of the universe.
Posted by Michael Feldman - August 13, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.