February 24, 2011
More than any other country, China has used GPGPUs to propel its standing in the upper echelons of supercomputing. The 2.56 petaflop Tianhe-1A is now the top system in the world, and the country's 1.27 petaflop Nebulae machine is ranked third. Both are GPU-powered, relying on NVIDIA Tesla parts and, to a lesser extent, Intel CPUs to achieve their stratospheric performance levels.
But the Chinese don't intend to rely entirely on graphics chips, or US-based chipmakers for that matter, to build their next-generation HPC machines. At the International Solid-State Circuits Conference (ISSCC) this week, Weiwu Hu, the lead designer of China's Godson processor (also known as Loongson), described his team's progress on the home-grown CPU and talked about Godson's upcoming debut in a supercomputer to be deployed later this year.
Specifically, Hu told the processor-loving crowd at ISSCC that the new 8-core Godson-3B processor will be used to power a Dawning-built supercomputer slated to be booted up this summer. That system is expected to hit about 300 teraflops using 3,000 of the Godson chips, he said. If successful, the Chinese will have built their first truly indigenous TOP500 supercomputer.
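As a rough sanity check, here is a quick back-of-the-envelope sketch using only the figures Hu quoted, showing what the 300 teraflop target implies per processor (whether that target is peak or Linpack wasn't specified):

```python
# Back-of-the-envelope check on the planned Dawning system, using the figures from Hu's ISSCC talk
system_flops = 300e12   # ~300 teraflops expected for the full machine
chip_count = 3000       # number of Godson-3B processors

per_chip = system_flops / chip_count
print(f"Implied performance per chip: {per_chip / 1e9:.0f} gigaflops")  # ~100 gigaflops per Godson-3B
```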
Interestingly enough, it will also mark the return of a MIPS-based supercomputer to the TOP500. The Godson family implements a 64-bit MIPS architecture (which just happens to include additional smarts for x86 compatibility). The last MIPS-based machine on the TOP500 was an SGI Origin 3000 system, which fell off the list in 2005.
As one might expect from a MIPS architecture, the Godson delivers outstanding performance-per-watt numbers. The new Godson-3B achieves 128 gigaflops at a power-sipping 40 watts. The relatively slow clock speed (1.05 GHz) is the key to the low energy use. In fact, the 3.2 gigaflops/watt achieved by the Godson-3B CPU is even better than the 2.3 gigaflops/watt delivered by a Fermi Tesla device (M2050), although to be fair, the Tesla part also powers 3 GB of on-board memory and some other components. Nevertheless, the Godson-3B appears to be a very power-efficient design, and the upcoming Dawning machine could rival even Blue Gene/Q systems for performance-per-watt supremacy.
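For those keeping score, the comparison works out as follows. This is just a sketch: the Godson numbers are from Hu's talk, while the M2050 peak and board-power figures are NVIDIA's published double-precision specs rather than numbers from the article.

```python
# Performance-per-watt comparison cited in the text
godson_3b_flops = 128e9   # peak gigaflops quoted for the 8-core Godson-3B
godson_3b_watts = 40      # quoted power draw

m2050_flops = 515e9       # NVIDIA Tesla M2050 peak double-precision flops (published spec, not from the talk)
m2050_watts = 225         # M2050 board power, which also covers 3 GB of on-board memory

print(f"Godson-3B:   {godson_3b_flops / godson_3b_watts / 1e9:.1f} gigaflops/watt")  # ~3.2
print(f"Tesla M2050: {m2050_flops / m2050_watts / 1e9:.1f} gigaflops/watt")          # ~2.3
```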
Hu also previewed the next-generation Godson-3C design, a 16-core processor that is expected to deliver 512 gigaflops -- four times that of its predecessor. Apparently most of the extra flops come courtesy of the process shrink to 28nm, which allows the engineers to crank the clock up to 2 GHz and double the core count. The Godson-3C is slated for launch in 2013 and, according to Hu, will be used to power the Dawning 6000, a petascale supercomputer.
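The claimed fourfold jump is consistent with a simple cores-times-clock scaling of the figures Hu presented; here's a quick sketch of that arithmetic:

```python
# Scaling from Godson-3B to Godson-3C using the figures in Hu's presentation
cores_3b, clock_3b = 8, 1.05e9   # Godson-3B: 8 cores at 1.05 GHz
cores_3c, clock_3c = 16, 2.0e9   # Godson-3C: 16 cores at 2 GHz (28nm)

scaling = (cores_3c * clock_3c) / (cores_3b * clock_3b)
print(f"Raw cores-times-clock scaling: {scaling:.1f}x")     # ~3.8x, in line with the quoted fourfold gain
print(f"Projected peak: {128 * scaling:.0f} gigaflops")     # ~488, close to the 512 gigaflop figure
```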
In the near term, Godson is not likely to challenge x86 CPU dominance in HPC or anywhere else, but it may cut into future sales of Intel and AMD parts in China-bound supercomputers, clusters, and servers. China, being the hottest economy on the planet, represents a lot of potential revenue for these companies.
Perhaps more worrisome would be if China were to export Godsons, or systems based on those chips, to other countries -- specifically other countries that US-based vendors are prohibited from exporting to. One that comes to mind is Iran, which has a nascent supercomputing infrastructure, and which is on friendly terms with China.
With that in mind, there were reports this week that Iran launched two supercomputers -- one at the Amirkabir University of Technology and the other at Isfahan University of Technology. The Amirkabir machine was said to attain a performance of 32 or 34 teraflops, depending upon which news publication you were reading. There was even one report that pegged the Amirkabir system at 89 teraflops, which would probably place it within the ranks of the TOP500 if the Iranians could be persuaded to submit a Linpack run. All of this is second-hand reporting, and given that there's not exactly a free press in Iran, none of these reports can really be trusted.
Not lost in translation is the fact that Iran is truly interested in developing its HPC infrastructure. Like most Middle Eastern countries with any money, Iran would like to use high performance computing to jump-start its science and engineering community. Iran also could use high-end machines to support its defense industry and help with its nuclear aspirations. The US, of course, would like to hinder both.
Not so China, which is highly motivated to counter-balance US and Western interests in the Middle East. So the Godson chips, and the supercomputers China builds with them, may have more far-reaching consequences than the emergence of an indigenous HPC capability. If China starts distributing high-end computing parts hither and yon, the West's technological edge could be blunted significantly. Of course, given the recent upheaval in the Middle East, we really don't yet know where the ummm...chips may fall.
Posted by Michael Feldman - February 24, 2011 @ 5:57 PM, Pacific Standard Time
Michael Feldman is the editor of HPCwire.