October 29, 2009
China has apparently become the third country to build a petaflop supercomputer. Xinhua news agency reported on Thursday that the country has unveiled "Tianhe," a 1.206 peak petaflop machine powered by a combination of 6,144 Intel CPUs and 5,120 AMD GPUs. Amazingly, the price tag was a mere $88.24 million. The system is installed at the National University of Defense Technology (NUDT) in Changsha, capital of central China's Hunan Province.
In the TOP500 sense, Tianhe would not be considered a true petaflop system. According to online reports, the machine achieves only(!) 563.1 teraflops with Linpack. If that number holds up, it would almost certainly earn Tianhe a spot in the top 10 of the upcoming TOP500 list. Today there are only three systems that break the 500 teraflop barrier on Linpack: Roadrunner at Los Alamos National Laboratory, Jaguar at Oak Ridge National Laboratory, and JUGENE at Jülich Supercomputing Center. China's top system on the current list is Dawning's "Magic Cube" supercomputer, located at the Shanghai Supercomputer Center. With a Linpack rating of 180.6 teraflops, the Dawning machine sits at number 15.
At some future date, NUDT is slated to add "hundreds or thousands of China-made CPUs to the machine, and improve its Linpack performance to over 800 teraflops," according to Zhou Xingming, an academician in the Chinese Academy of Sciences and a professor at NUDT.
Neither the Xinhua news article nor the other early reports from Chinese sources provide much detail about the system's architecture. Specifically, no information was offered about the kind of Intel CPU and AMD GPU parts used, nor about the Chinese-made CPUs to be plugged in later on. At press time, NUDT could not be reached for further clarification about Tianhe's make-up, and AMD declined to offer any additional details.
If I had to speculate, I would guess that the Intel chips are Nehalem EPs and the AMD parts are FireStream 9270s. Presumably most of the FLOPS come from the GPUs. In fact, 5,000 9270s would represent 1.2 double precision petaflops all by themselves. The future Chinese CPUs are likely to be of the Godson-3 variety, which is expected to debut in 2010. Note that the Godson-3 implements the MIPS architecture, but can emulate x86 instructions as well.
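As a sanity check on that guess, here is a minimal back-of-envelope sketch in Python. The per-card figure of 240 double precision GFLOPS is the commonly cited rating for the FireStream 9270 and is my assumption, not anything NUDT has reported.

```python
# Back-of-envelope GPU peak, assuming (speculatively) FireStream 9270 parts
# rated at 240 double-precision GFLOPS apiece.
GFLOPS_PER_CARD_DP = 240  # assumed per-9270 double-precision peak

def aggregate_petaflops(num_cards, gflops_per_card=GFLOPS_PER_CARD_DP):
    """Aggregate double-precision peak across all cards, in petaflops."""
    return num_cards * gflops_per_card / 1_000_000  # GFLOPS -> PFLOPS

print(aggregate_petaflops(5000))  # 1.2    -- the round figure cited above
print(aggregate_petaflops(5120))  # 1.2288 -- all of Tianhe's reported GPUs
```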
In addition to the sketchy details on the architecture, no mention was made of the application set the machine will be targeting. NUDT is jointly run by the Ministry of National Defense and the Ministry of Education, which gives you some idea of its areas of interest. According to the university's Web site, the institution is devoted to basic sciences, engineering, military science, management, economics, philosophy, literature, education, law, and history.
Impressive as this all sounds, Tianhe's rather low Linpack efficiency (Rmax/Rpeak) may limit its applicability somewhat. Linpack usually represents a nominal high-water mark for the kind of performance you're likely to get from math-intensive applications. The NUDT machine didn't even manage to reach the 50 percent mark in efficiency -- just 563 out of a possible 1,206 teraflops. Most supers have a Linpack efficiency north of 75 percent, even for vanilla GigE clusters. The new Earth Simulator in Japan boasts a 93.4 percent figure.
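For the record, the efficiency arithmetic is straightforward; a quick sketch using the figures quoted above:

```python
# Linpack efficiency is just Rmax divided by Rpeak.
rmax_tflops = 563.1    # reported Linpack result (Rmax)
rpeak_tflops = 1206.0  # reported peak (Rpeak)
print(f"{rmax_tflops / rpeak_tflops:.1%}")  # 46.7%
```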
Undoubtedly, the problem is related to extracting Linpack FLOPS from the GPUs. Although one would think these general purpose graphics processors would excel at this type of vector math, optimal Linpack performance is also dependent on a generous cache. Modern CPUs have plenty of it, but GPUs contain only limited internal caches. That means the graphics chip would have to access the relatively slower on-board GDDR memory to refresh its data, or worse yet, go across the PCIe bus to get some more data from CPU memory. NVIDIA's upcoming Fermi processor will be the first GPU with a true cache hierarchy (not to mention much better double precision performance), so I imagine Linpack results on this architecture should be a good deal more impressive.
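To see why data movement matters so much here, consider a toy roofline-style estimate: attainable FLOPS are capped either by the chip's compute peak or by how fast operands can be delivered. All numbers below are illustrative assumptions -- a 9270-class card at roughly 240 DP GFLOPS with ~109 GB/s of on-board GDDR bandwidth versus ~8 GB/s over a PCIe 2.0 x16 link -- not measured Tianhe figures.

```python
# Toy roofline model: attainable GFLOPS = min(compute peak, bandwidth * intensity).
# All inputs are illustrative assumptions, not measured Tianhe numbers.
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    """Performance is capped by compute peak or by memory traffic."""
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# The same kernel at an assumed arithmetic intensity of 8 flops/byte:
print(attainable_gflops(240, 109, 8))  # 240 -- fed from on-board GDDR5
print(attainable_gflops(240, 8, 8))    # 64  -- starved across PCIe 2.0 x16
```

Once the working set spills out of the GPU's small on-chip memories, the card spends its time waiting on exactly this kind of traffic, which is consistent with Tianhe's sub-50-percent Linpack yield.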
In the meantime, Tianhe will represent an interesting test case for a CPU-GPU hybrid supercomputer, an architecture which is likely to become more commonplace over the next few years. It also signals China's intention to become a bigger player in the supercomputing arena. Given the country's huge cash reserves and the government's willingness to invest in high-tech, there's not much that can stop it.
Posted by Michael Feldman - October 29, 2009 @ 6:41 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.