November 03, 2011
Just three and a half years after IBM broke the petaflop barrier with its Roadrunner supercomputer, Fujitsu's "K computer" has passed the 10 petaflops mark. Fujitsu and RIKEN announced on Tuesday that they have completed the final build-out of the system and achieved 10.51 petaflops on Linpack, reaching a major milestone of Japan's Next-Generation Supercomputing Project.
In June of this year, Fujitsu and RIKEN captured the number one spot on the TOP500 with a Linpack result of 8.16 petaflops for the partially completed K system. It marked the first time a Japanese system was number one on the list since the Earth Simulator supercomputer held the title from 2002 through 2004.
The completed K system, housed at RIKEN's Advanced Institute for Computational Science in Kobe, is powered by more than 88 thousand SPARC64 VIIIfx CPUs. The 8-core SPARC64 VIIIfx chip was purpose-built for HPC, delivering 128 peak gigaflops at 2.0 GHz, while drawing a relatively modest 58 watts. Each CPU represents a single node, but four of the SPARC chips are glued to a single system board, 24 of which make up a rack. The whole system comprises 864 of these racks.
The peak performance of the final system is a whopping 11.28 petaflops, and thanks to Fujitsu's 6D Tofu interconnect, the system was able to squeeze better than 93 percent Linpack efficiency from the floating point parts -- a rather remarkable feat. Total time for the Linpack run: 29 hours and 28 minutes.
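Those headline numbers are easy to sanity-check. The sketch below redoes the arithmetic, assuming the widely reported CPU count of 88,128 for the full system (the article only says "more than 88 thousand"):

```python
# Back-of-the-envelope check of the K computer's published numbers.
PFLOPS = 1e15

cpus = 88_128               # assumed CPU count for the full system
gflops_per_cpu = 128        # 8 cores x 2.0 GHz x 8 flops per cycle

peak_pflops = cpus * gflops_per_cpu * 1e9 / PFLOPS
linpack_pflops = 10.51      # the reported Linpack (Rmax) result
efficiency = linpack_pflops / peak_pflops

print(f"peak: {peak_pflops:.2f} PF")        # peak: 11.28 PF
print(f"efficiency: {efficiency:.1%}")      # efficiency: 93.2%
```

The 93-percent figure is what makes the result notable: large Linpack runs on commodity clusters of that era typically landed well below that, so the Tofu interconnect and the purpose-built CPU are doing real work here.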
Of course, the real value of all these flops is not Linpack. The K is destined for all sorts of big science workloads, including nanotechnology simulations, drug discovery, materials design, climate prediction, industrial design, and cosmology, among others. The multi-petaflops capabilities of the machine should enable some of these applications to push the envelope of their respective domains.
Applications aside, Japanese supercomputing prestige is soaring with the K machine right now, and unless there's a surprise Chinese system waiting in the wings to overtake it, the system will retain its title as the most powerful computer on the planet. It looks like all other double-digit-petaflop machines in the pipeline won't be up and running until next year.
If IBM hadn't parted ways with NCSA over the Blue Waters Project, the K system might already have had some serious competition from the US. Blue Waters, which was also supposed to be a 10-petaflop system, in this case based on Power7 technology, was originally slated to come online toward the end of this year. Obviously, that's not going to happen.
Another contender is the Jaguar supercomputer upgrade at Oak Ridge National Lab (ORNL), which will result in a 10 to 20-petaflop system. That machine, which will be renamed "Titan," will be outfitted with the next-generation "Kepler" GPUs from NVIDIA, but that work isn't expected to be completed until late 2012. The first phase of the upgrade, which involves plugging 960 Fermi-class GPUs into the machine, is already in motion, and is expected to be completed this year. But it's rather unlikely those initial enhancements will yield anything approaching 10 petaflops.
Other leading-edge petascale machines include the two big IBM Blue Gene/Q systems headed for US DOE centers: "Mira," a 10-petaflop system destined for Argonne National Lab, and "Sequoia," a 20-petaflop machine, which will be installed at Lawrence Livermore. But neither of these Blue Genes is expected to be operational until 2012.
Likewise for the 10-petaflop Dell-built cluster for TACC, named "Stampede." That machine will be relying on Intel's Many Integrated Core (MIC) coprocessor to provide most of the flops, and since the first production MIC ("Knights Corner") won't be available for at least a year, that system won't be up and running until late 2012.
Technically, the K Computer is not quite ready for prime time either. The Linpack run was part of the machine's verification process. Over the next few months, the engineers will be developing and tuning the system software, work that should be completed by June 2012. Real production users are not expected to be able to log on until November 2012.
Beyond its 10-petaflop adventure, Fujitsu would like to start selling SPARC64 VIIIfx-based servers outside of Japan. It would certainly make sense for Fujitsu to try to cash in on its investment in the SPARC chip and K design. But as impressive as the technology is, the market has not exactly embraced custom-built HPC.
For political reasons, the US government supercomputing labs would be unlikely to import foreign HPC of any flavor. And considering the attractive price-performance of x86 HPC, smaller clusters of K would probably not have much of a market in the commercial HPC space. Fujitsu could perhaps export K-type supercomputers to Europe and elsewhere in Asia. But as we saw last week, China is interested in developing its own HPC industry, and the large European centers are more apt to stick with the supercomputer vendors they know best -- mainly IBM, Cray, and Bull.
For the time being though, Fujitsu and Japan can bask in the glow of their accomplishment and enjoy their newfound position at the top of the supercomputing heap. If history is any guide, these moments tend to be rather fleeting.