April 26, 2011
China's meteoric rise to the top of the supercomputing heap has generated plenty of angst in the West. At a time when government budgets in the US, Europe and Japan are being slashed, China is investing heavily in its high performance computing capability. At the most recent IDC HPC User Forum, a presentation on China's top 100 supercomputers pointed to how far and how fast the nation has come in a few short years.
When China's HPC TOP100 was first published in 2002, the country had a total of 5 machines on the international TOP500 list. Since then, the number of Chinese systems has grown steadily, with its fastest increase -- from 24 to 42 systems -- occurring last year. The latest rankings from November 2010 have China with the number one and number three machines -- the 2.5 petaflop Tianhe-1A and 1.3 petaflop Nebulae systems, respectively -- along with three other supercomputers in the top 100.
At the IDC HPC User Forum, Liang Yuan, of China's Laboratory of Parallel Software and Computational Science -- part of the Institute of Software, Chinese Academy of Sciences (ISCAS) -- talked about some interesting trends in the country's top supers. Perhaps most notable is China's aggressive adoption of GPU technology, which propelled the multi-petaflop Tianhe-1A to the number one spot in 2010. In fact, the country's top three systems are all heterogeneous CPU-GPU machines, based on Intel Xeon and NVIDIA Tesla processors.
Some other interesting facts from Liang Yuan's presentation (PPT):
The application set for these systems is pretty much on par with other high-end supercomputers around the world. Energy, industrial and research codes are the top three applications, running on 17 percent, 15 percent, and 12 percent of these TOP100 systems, respectively. Gaming applications, surprisingly, are hosted on 9 percent of the machines, representing the same proportion as government apps. Other HPC applications, including telecom, weather, biotech, finance, and a handful of others, are present in smaller amounts. It's not clear how accurate this application breakdown really is, since it doesn't appear to account for multiple application types running on the same system.
Where the China TOP100 machines diverge most noticeably from other countries (besides the US, that is) is in the proportion of systems built domestically. Overall, about half the systems, 51 percent to be exact, come from US-based vendors, with the remaining 49 percent built by Chinese manufacturers. IBM and HP dominate the foreign OEMs, with a 28 percent and 19 percent share, respectively. Dell at 3 percent and Sun Microsystems (Oracle) at 1 percent are the only other two that show up on the list.
Looking at the domestic manufacturers, Dawning owns the lion's share of the TOP100 market, with 34 percent of all systems. Lesser-known server makers Inspur (5 percent), Lenovo (3 percent), Sunway (3 percent), and PowerLeader (2 percent) contribute much less to this elite tier.
Two of China's largest machines were constructed by government organizations, in this case, the National University of Defense Technology (NUDT), which designed and built the top-ranked 2.5 petaflop Tianhe-1A supercomputer (which features a home-grown system interconnect), and the Chinese Academy of Sciences' Institute of Process Engineering, which developed the 207-teraflop Mole 8.5 cluster. Whether this becomes a systems development model for future machines, or it goes the more traditional route of vendor collaborations remains to be seen. But right now the Chinese government stands alone as an HPC OEM.
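The vendor shares quoted above can be cross-checked with a quick tally. This is a sketch based only on the percentages in the article; attributing the residual 2 percent to the two government-built machines (NUDT and the CAS Institute of Process Engineering) is an inference from the numbers, not something the presentation states explicitly.

```python
# Vendor shares of China's TOP100 as cited in the article (percent of systems).
us_vendors = {"IBM": 28, "HP": 19, "Dell": 3, "Sun/Oracle": 1}
domestic_vendors = {"Dawning": 34, "Inspur": 5, "Lenovo": 3,
                    "Sunway": 3, "PowerLeader": 2}
# Assumption: the two government-built systems fill the remaining 2 percent.
government = {"NUDT (Tianhe-1A)": 1, "CAS IPE (Mole-8.5)": 1}

assert sum(us_vendors.values()) == 51  # matches the 51 percent US figure
assert sum(domestic_vendors.values()) + sum(government.values()) == 49

print("US share:", sum(us_vendors.values()), "percent")
print("Domestic share:",
      sum(domestic_vendors.values()) + sum(government.values()), "percent")
```

The named domestic vendors only account for 47 of the 49 percent, which is consistent with the two government organizations acting as OEMs for their own machines.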
It's worth noting that these two government machines represent a big chunk of the FLOPS on the nation's TOP100 -- greater than the aggregate capacity contributed by Dawning and the other Chinese manufacturers, and more than twice the capacity of the US-built machines. Overall, the top supers built domestically deliver 5.052 petaflops, with the imports contributing a relatively modest 1.18 petaflops.
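The arithmetic behind those comparisons can be sketched from the figures in the article. The per-system numbers (2.5 petaflops for Tianhe-1A, 0.207 for Mole-8.5) are the article's rounded values, so the totals below are approximate.

```python
# Figures cited in the article, in petaflops (Linpack).
tianhe_1a = 2.5    # NUDT-built, number one on the China TOP100
mole_8_5 = 0.207   # CAS Institute of Process Engineering

government = tianhe_1a + mole_8_5   # ~2.7 PF from the two government machines

domestic_total = 5.052   # all Chinese-built systems on the TOP100
imports = 1.18           # US-built systems on the TOP100

other_domestic = domestic_total - government  # Dawning, Inspur, Lenovo, etc.

# The two claims in the text:
assert government > other_domestic   # more than the rest of the domestic vendors
assert government > 2 * imports      # more than twice the US-built capacity

print(f"government: {government:.3f} PF, "
      f"other domestic: {other_domestic:.3f} PF, imports: {imports} PF")
```

Both assertions hold with these rounded inputs, which is consistent with the claim that the Chinese government currently contributes more capacity than either the domestic vendors or the foreign OEMs.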
That skewed distribution illustrates China's broader strategy for developing its supercomputing infrastructure, that is, develop indigenous system expertise and capability and lessen its reliance on imports. That approach will eventually work its way down to the CPU level. To date, Chinese supercomputing has relied almost exclusively on chips from Intel, AMD and NVIDIA.
The big push right now is to get the domestically-designed Godson CPU technology deployed in supercomputers. Godson (aka Loongson) is a MIPS-based processor family, developed by the government-backed Institute of Computing Technology (ICT) in the Chinese Academy of Sciences. Starting in 2002, the Godson designs have slowly worked their way up the performance ladder, adding 64-bit capability in 2006. In 2007, a supercomputer named KD-50-I was constructed, using 336 Godson-2F processors to deliver one teraflop of performance.
At a presentation at the International Solid-State Circuits Conference (ISSCC) in February, Godson lead engineer Weiwu Hu revealed that the Godson-3B will be the CPU in the upcoming 300-teraflop Dawning machine slated for installation this summer. These are 8-core chips, designed to deliver 128 raw gigaflops at just 40 watts, and are said to rival the best US-made processors in power-efficiency and performance.
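Those Godson-3B figures work out to a notable flops-per-watt ratio. For a rough yardstick, the comparison below uses a contemporary 6-core Intel Xeon X5670 (2.93 GHz, 95 watt TDP, 4 double-precision flops per cycle per core); that peak-flops estimate is a back-of-the-envelope approximation, not a vendor-published benchmark.

```python
# Efficiency implied by the Godson-3B figures quoted above.
godson_gflops, godson_watts = 128, 40

# Approximate peak for a contemporary Xeon X5670 (assumption, not from
# the article): 6 cores * 2.93 GHz * 4 DP flops/cycle ~= 70 gigaflops.
xeon_gflops, xeon_watts = 6 * 2.93 * 4, 95

godson_eff = godson_gflops / godson_watts   # 3.2 GF/W
xeon_eff = xeon_gflops / xeon_watts         # roughly 0.74 GF/W

print(f"Godson-3B: {godson_eff:.2f} GF/W vs Xeon X5670: {xeon_eff:.2f} GF/W")
```

On raw peak numbers, that is roughly a 4x efficiency advantage, which helps explain the claim that the chip rivals US-made processors in power-efficiency, though peak flops per watt says nothing about sustained application performance.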
While it may take a few chip generations for the Godson processors to become a force in Chinese HPC, the country's direction has become clear: to become a major player in supercomputing from top to bottom, and do so with native capability. Liang Yuan's IDC presentation ended with a couple of predictions, namely that China intends to deploy a 10-petaflop Linpack system in the 2012 to 2013 timeframe and a 100-petaflop machine two years later.
That would almost certainly keep pace with the top systems in the US and outrun European-based machines by at least a year or two. More importantly, China appears determined to have a US-like presence in supercomputing, building not just a top-tier infrastructure, but an HPC industry as well. This has generated plenty of nervousness in the US HPC community, which sees its leadership threatened. A recent address by Dona Crawford, associate director for Computation at LLNL, sums up the feeling rather well:
So it's not that I want to beat China per se; it's that I want us to have parity with them. I don't want to rely on them for the chip technology embedded in the supercomputers we use for national security. I don't want to rely on them for the low level software that runs my supercomputer because they figured out the parallelism before we did. I don't want to rely on them, or anyone else, for my own standard of living, for my safety and security, for the inventions that propel us forward, for open dialog and communications, all of which rely on supercomputing. I want the U.S. to be self reliant, capable and responsible for our own prosperity.