June 20, 2011
A Japanese supercomputer took the title of fastest computer in the world after the latest TOP500 list was announced Monday morning at the International Supercomputing Conference in Hamburg, Germany. Fujitsu's K Computer, powered by the latest SPARC64 VIIIfx CPUs and the "Tofu" interconnect, delivered a world-beating 8.162 petaflops on the Linpack benchmark, vaulting over the now second-place 2.57-petaflop Tianhe-1A supercomputer in China and the third-place 1.76-petaflop Jaguar supercomputer in the US.
The last Japanese supercomputer that topped the TOP500 list was the Earth Simulator, which held the number one spot from 2002 to 2004. That system, by the way, delivered 35 teraflops, which doesn't even rate a place on the current list.
As of today, the top 10 supers are:

1. K Computer (Fujitsu, Japan) – 8.162 petaflops
2. Tianhe-1A (NUDT, China) – 2.566 petaflops
3. Jaguar (Cray, US) – 1.759 petaflops
4. Nebulae (Dawning, China) – 1.271 petaflops
5. TSUBAME 2.0 (NEC/HP, Japan) – 1.192 petaflops
6. Cielo (Cray, US) – 1.110 petaflops
7. Pleiades (SGI, US) – 1.088 petaflops
8. Hopper (Cray, US) – 1.054 petaflops
9. Tera-100 (Bull, France) – 1.050 petaflops
10. Roadrunner (IBM, US) – 1.042 petaflops
That's right, all of the top 10 systems are now a petaflop or more, and IBM's Roadrunner, the first machine to crack the petaflop mark back in 2008, has been pushed into the number 10 spot.
Unlike in years past, when IBM and Cray dominated these top systems, today there's a much greater degree of vendor parity. Besides those two supercomputer makers, Fujitsu, HP, NEC, SGI, Dawning, and Bull all claim at least one of these petaflop systems. The big surprise, of course, is Fujitsu. Long absent from the top ten, the Japan-based computer maker has made a spectacular comeback with the K deployment.
The K Computer (short for Kei Soku Keisanki) has had a tumultuous history. The system is the result of Japan's Next-Generation Supercomputing Project, an effort led by RIKEN, a government-backed research agency. Initially the project was a joint venture involving NEC, Hitachi, and Fujitsu, with the original design mixing NEC vector processors with Fujitsu scalar ones. In 2009, NEC and Hitachi backed out of the contract, leaving Fujitsu as the lone system vendor. Subsequently, the Japanese government considered pulling the plug on the project, but later reinstated most of the funding.
The final K system, set for completion in 2012, is spec'd for 10 petaflops, so one can assume we'll see that upgrade over the next year. Nevertheless, even in its unfinished state, the K system is quite impressive. Not only is the machine more than three times as powerful, FLOPS-wise, as the number two GPU-powered Tianhe-1A, but it is even more energy-efficient, delivering over 8 Linpack petaflops on less than 10 megawatts of power. That's almost as energy-efficient as the other power-sipping Japanese petaflop supercomputer, the GPU-accelerated TSUBAME 2.0 machine.
The exceptional energy efficiency of K is provided courtesy of the 8-core SPARC64 VIIIfx processor, a 58-watt chip that delivers 128 peak gigaflops. That's nearly up to the standards of an HPC-style GPU, a processor that basically does nothing but FLOPS. For comparison, an IBM Power7 CPU provides about 256 gigaflops but consumes 200 watts, while IBM's other HPC chip, the PowerPC A2 SoC used in Blue Gene/Q, looks to be around twice as energy-efficient as the current crop of GPUs.
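To make those performance-per-watt comparisons concrete, here's a minimal back-of-the-envelope sketch in Python. The chip figures are the ones quoted above; the system power draws (roughly 9.9 megawatts for K, about 4.0 megawatts for Tianhe-1A) are assumptions drawn from the June 2011 TOP500 list rather than from this article:

```python
# Back-of-the-envelope flops-per-watt comparison.
# Chip numbers are quoted in the article; system power figures
# (~9.9 MW for K, ~4.0 MW for Tianhe-1A) are assumptions taken
# from the June 2011 TOP500 list, not from this article.

chips = {                      # name: (peak gigaflops, watts)
    "SPARC64 VIIIfx": (128.0, 58.0),
    "IBM Power7":     (256.0, 200.0),
}

systems = {                    # name: (Linpack teraflops, megawatts)
    "K Computer": (8162.0, 9.9),
    "Tianhe-1A":  (2566.0, 4.0),
}

for name, (gflops, watts) in chips.items():
    print(f"{name}: {gflops / watts:.2f} peak gigaflops per watt")

for name, (tflops, megawatts) in systems.items():
    # teraflops -> megaflops and megawatts -> watts both scale by 1e6,
    # so the ratio is simply tflops / megawatts in megaflops per watt.
    print(f"{name}: {tflops / megawatts:.0f} Linpack megaflops per watt")
```

Run as written, this prints about 2.21 gigaflops per watt for the SPARC64 VIIIfx versus 1.28 for the Power7, and roughly 824 megaflops per watt for K versus around 642 for Tianhe-1A, which squares with the efficiency claims above.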
In any case, don't expect SPARC64 VIIIfx systems to start populating the TOP500 list (or any list) in force. This is a specialty chip, even more so now, thanks to Oracle's abandonment of Sun Microsystems' supercomputing business. It does, however, demonstrate that purpose-built CPUs can deliver performance-per-watt efficiencies on par with GPUs for high performance computing.
Also, don't expect the K Computer to stake out the number one spot for very long. It will almost certainly not enjoy the two-year reign the Earth Simulator began in 2002. NCSA's Power7-based Blue Waters system is slated to hit the 10-petaflop mark when it's installed later in 2011, and Lawrence Livermore National Lab's Blue Gene/Q Sequoia supercomputer is aiming for 20 petaflops when fully deployed in 2012. Also on the drawing board is the GPU-accelerated OLCF-3 system at Oak Ridge National Lab, which is expected to deliver between 10 and 20 petaflops. And China certainly has plans to build systems in the 10-petaflop range and beyond.
Speaking of which, even though China's top super got out-Linpacked this time around, the country continues to fill up TOP500 slots at a breakneck pace. The nation now has 62 supercomputers on the list, up from just 24 a year ago. As a result, China has more top machines than Germany and the UK combined, and more than any nation except the US. Despite that, the US still owns more than half the total systems (256) on the list. But depending upon what Asia and Europe deploy over the next six months, the number of US-based supercomputers on the TOP500 could conceivably slide below the 50 percent mark by the time the next list comes out in November.