Chinese Super Breaks World Record in Application Performance

By Michael Feldman

June 9, 2011

In case you were wondering if these new-fangled Chinese GPU-powered supercomputers can do anything useful, Thursday’s announcement about the latest exploits of the Tianhe-1A system should give you some idea of the significance of these petascale beasts. Researchers from the Chinese Academy of Sciences’ Institute of Process Engineering (CAS-IPE) claimed to have run a molecular simulation code at 1.87 petaflops — the highest floating point performance ever achieved by a real-world application code. The simulation is being used to help discern the behavior of crystalline silicon, a material used in solar panels and semiconductors.

According to NVIDIA, the application used just 2,000 lines of CUDA to accelerate the simulation — not an inconsequential amount of source code, but considering the result, a pretty impressive ROI. In addition, all the reported FLOPS for this application were attributed to GPUs, in this case, 7,168 of them. The three-hour simulation modeled the behavior of 110 billion atoms, beating out the previous record for a molecular simulation code, which modeled 49 billion atoms at 369 teraflops. The latter was performed on Roadrunner, the original petaflop super, accelerated by IBM’s souped-up Cell processors, the PowerXCell 8i.
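
The CAS-IPE code itself hasn’t been published alongside the announcement, but the reason such simulations map so well onto GPUs is that the force on each atom can be computed independently, one thread per atom. A minimal sketch of that idea — a brute-force Lennard-Jones pairwise force kernel, with all names, parameters, and problem sizes purely hypothetical and no relation to the actual Tianhe-1A application — might look like this:

```
// Hypothetical sketch only: a brute-force pairwise Lennard-Jones force kernel.
// Production MD codes use neighbor lists and multi-GPU domain decomposition;
// this just illustrates where the GPU FLOPS in a molecular simulation come from.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void lj_forces(const float4 *pos, float4 *force, int n,
                          float epsilon, float sigma)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per atom
    if (i >= n) return;

    float4 pi = pos[i];
    float fx = 0.f, fy = 0.f, fz = 0.f;

    for (int j = 0; j < n; ++j) {                   // O(n^2) loop, for clarity only
        if (j == i) continue;
        float dx = pi.x - pos[j].x;
        float dy = pi.y - pos[j].y;
        float dz = pi.z - pos[j].z;
        float r2 = dx*dx + dy*dy + dz*dz;
        float s2 = (sigma * sigma) / r2;            // (sigma/r)^2
        float s6 = s2 * s2 * s2;                    // (sigma/r)^6
        // Scalar multiplier on the displacement vector, from -dU/dr of the LJ potential
        float f = 24.f * epsilon * s6 * (2.f * s6 - 1.f) / r2;
        fx += f * dx;  fy += f * dy;  fz += f * dz;
    }
    force[i] = make_float4(fx, fy, fz, 0.f);
}

int main()
{
    const int n = 1 << 16;                          // toy problem size
    float4 *pos, *force;
    cudaMallocManaged(&pos,   n * sizeof(float4));
    cudaMallocManaged(&force, n * sizeof(float4));
    for (int i = 0; i < n; ++i)                     // arbitrary lattice of positions
        pos[i] = make_float4(i % 64, (i / 64) % 64, i / 4096, 0.f);

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    lj_forces<<<blocks, threads>>>(pos, force, n, 1.0f, 1.0f);
    cudaDeviceSynchronize();

    printf("force[0] = (%f, %f, %f)\n", force[0].x, force[0].y, force[0].z);
    cudaFree(pos);
    cudaFree(force);
    return 0;
}
```

Even this toy kernel makes the arithmetic intensity obvious: every atom pair contributes a fixed bundle of floating point operations, which is exactly the kind of work that keeps thousands of GPUs busy.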

The 1.87 petaflop performance is quite an achievement for the top-ranked Tianhe-1A, especially considering the current number two system, the CPU-only Jaguar at Oak Ridge Lab, manages just 1.76 petaflops on Linpack, an artificial benchmark designed to show off a system’s floating point muscles. In 2008, Jaguar delivered its own sustained petaflop for a real-world application, in this case a superconductor simulation code, which hit 1.35 petaflops*. That work nabbed the application team at Oak Ridge the Gordon Bell Prize that year.

Whether the CAS-IPE team wins any trophies for its molecular simulation application remains to be seen. The researchers will be presenting their work at the upcoming NVIDIA GPU Technology Conference (GTC) in December in Beijing, and also next May in San Jose, California at the US GTC event.

Over and above the impressive FLOPS is the larger significance of using the technology to propel science and engineering forward. Last year, NVIDIA Tesla GM Andy Keane penned an opinion piece warning that the lagging adoption of GPUs in HPC could threaten the country’s competitive edge. While that editorial could easily be construed as self-serving for his employer’s interests, the fact is that the US and Europe have lagged countries like China and Japan in adopting this technology for their most elite systems. Those nations saw the revamped graphics chip as the most economical path to petascale machines.

Of course, there are valid reasons to be wary of GPU computing for HPC — programmability difficulties, over-hyping of performance, proprietary software, etc. — leading many in the HPC community to be extra careful about adopting the technology. But the negative backwash from the original flood of hype can be as ill-informed as the initial exaggerations. In the current issue of HPCwire, Stone Ridge Technology CEO and GPU enthusiast Vincent Natoli offers a nice set of rebuttals to the major objections to GPU computing. If you’re a GPGPU fence-sitter, it’s definitely worth a read.

Beyond the significance of GPU usage, the application work demonstrates that the Chinese are not just building these big machines for national prestige. Simulations such as these support basic science research that can be applied to designing and manufacturing better solar energy panels and semiconductor devices. These types of high-tech commercial applications are exactly what the US and other industrialized countries envision as the basis for their future economic growth, and their ability to compete in the global marketplace.

In that sense, even though today’s announcement won’t appear on the front page of the New York Times, as did the Tianhe-1A TOP500 news, this development is arguably much more significant.

It’s also best to see this achievement in the larger context of what the Chinese scientific community is doing. A recent article in Forbes points out that China is quickly catching up to the US in scientific output, and in some cases surpassing it:

In 2009, for the first time, Chinese researchers published more papers in information technology than those in the U.S., with both countries churning out more than 100,000 info-tech publications. In clean and alternative energy, Chinese researchers have likewise been publishing up a storm, not surpassing U.S. researchers but coming close.

The bottom line is that the US is in danger of losing its technological edge, which it has basically enjoyed, unchallenged, since the end of World War II. It’s not that GPU computing is the magic bullet here. But news like this should be a wake-up call to American HPC’ers and policy-makers that sometimes being extra careful is the riskiest proposition of them all.

*The same superconductor simulation subsequently achieved 1.9 petaflops on the upgraded Jaguar supercomputer.
