Practicalities and Challenges in the Petaflops Era

By Thomas Sterling and Chirag Dekate

June 22, 2011

Every year at ISC we stop and look back at the field of HPC, which has consistently exhibited the greatest rate of change of any technology in the history of mankind. This year is particularly important, as the conventional methods that have served well over the last two decades are in direct contention with the technology trends pushing us towards a new future. This is best highlighted by the petaflops-capable supercomputers that have become the new standard at the top end of HPC, and by the reemergence of Asia as a dominant player in that ethereal regime.

But what has defined this year and distinguished it from the recent past is that although the issues are clear, the conclusions about the future are not. Perhaps that is the lesson: that we are in a rare state of transition, the outcome of which is yet to be determined. And the debate is anything but over. Let’s consider the highlights.

Petaflops computing is now the norm worldwide, with the US, Europe, and Asia all driving computation beyond 10^15 flops. Most notable was China, whose deployment of Tianhe-1A, exceeding 2.5 petaflops (Linpack), assumed the position of the “fastest computer in the world” in 2010. That system has now been surpassed by the 8 petaflops K Computer from Japan, giving that country the top spot for the first time since the illustrious Earth Simulator.

Asia also has a significant deployment of more traditional HPC systems, providing the means for strong programs in computational science with potential long-term impact on future science and engineering disciplines. Finally, an increasing share of the components integrated into these systems is homegrown, indicating a likely future with fully native HPC systems.

This year the big debate is the future of HPC system architecture: homogeneous multicore/manycore or heterogeneous GPU-based structures. And in both cases, the issue of programming dominates. GPUs are perceived by many as the fast track to superior computing, and for some applications this has been demonstrated. Indeed, of the top five machines, three incorporate GPUs. That would suggest a clear trend. But not so fast. Of the 500 systems on the list, only 17 integrate GPUs as a seminal element in achieving their performance goals. That would also suggest a clear trend, but in the opposite direction.

GPUs bring an enormous combined floating-point capability in a relatively small package and at a superior power/performance envelope. The numbers are staggering, but at a cost. Sitting at the wrong end of a PCI bus, their long latencies and relatively low bandwidth demand very high data reuse and highly regular control flow to extract anything near their peak potential. And with program control residing with the general-purpose processors, the programming methods for such hybrid systems are not for the faint of heart, nor are they consistent with the mass of legacy codes upon which industry, science, and governments all rely and in which they have invested.
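
To make the offload model concrete, here is a minimal sketch (ours, not drawn from any particular system discussed here) of the canonical CUDA pattern of the day: copy data across the PCI bus to the device, launch a kernel, copy the results back. Unless the kernel reuses each transferred byte many times, the two copies dominate the runtime, which is exactly why high data reuse and regular control flow are prerequisites for approaching peak.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative kernel: each thread scales one element.
// A single pass like this touches each byte only once, so the PCIe
// transfers below, not the arithmetic, dominate the total time.
__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, bytes);

    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // host -> device over PCIe
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);        // compute on the GPU
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);    // device -> host over PCIe

    printf("h[0] = %f\n", h[0]);                        // expect 2.0
    cudaFree(d);
    free(h);
    return 0;
}
```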

Thus, it is possible that architectures such as TSUBAME 2.0 are transitional, in that they represent the beginnings of an empirical search that in a few years will resolve into a distinctly different system architecture, one exploiting the best of both manycore and GPUs in a balanced and well-integrated structure managed by a unified programming methodology. While many practitioners experiment, sometimes to good effect, with CUDA and the emerging OpenCL framework, many more codes and programmers remain wedded to more routinely productive means.

These are very exciting times but those who think they know the final answer are probably fooling themselves, if not the rest of us. After all, the new number one K supercomputer is not based on GPUs but is 3 times faster than the number two Tianhe-1A machine, which is.

The steady increase in delivered performance is also pushing the power envelope. One advantage of GPUs, when employed effectively, is somewhat improved energy efficiency (joules per operation). But while clock rates remain relatively stable (although differing across a range of approximately 3X), the scale of the largest systems continues to grow as HPC approaches another milestone: a million cores. The tradeoff is complex, but grave concerns are warranted as the biggest machines top 10 megawatts.
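
For a rough sense of scale, using illustrative round numbers rather than figures for any specific machine, a system delivering 10 petaflops while drawing 10 megawatts spends about a nanojoule per floating-point operation:

```latex
\[
\frac{10\,\mathrm{MW}}{10\,\mathrm{PFLOPS}}
  = \frac{10^{7}\,\mathrm{J/s}}{10^{16}\,\mathrm{flop/s}}
  = 10^{-9}\,\mathrm{J/flop} = 1\,\mathrm{nJ/flop}
  \quad\Longleftrightarrow\quad 1\,\mathrm{GFLOPS/W}.
\]
```

Held constant, that energy per operation would put a sustained exaflops machine at roughly a gigawatt, which is why power, rather than peak arithmetic, frames the exascale projects discussed next.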

This is the principal driving constraint on ambitious projects to deliver sustained exaflops performance before the end of this decade. The International Exascale Software Project has worldwide representation and is coordinating the development of a new software platform that will support exascale systems in their management and application in the next decade. Recognizing the long lead times for software and its correspondingly near-prohibitive costs, combining investments of resources in mutually aligned directions would appear to be an essential strategy for achieving billion-way parallelism.

In the US, the DARPA-sponsored UHPC program, while not expressly targeting exascale systems, was initiated this year to develop suitable technologies for a petaflop in a rack at under 60 kilowatts. The European Exascale Software Initiative is developing a roadmap to exaflops, and, also in Europe, Intel and, separately, Cray are engaged in collaborations with European researchers to drive towards exaflops. In Asia, both Japan and China have programs intended to move aggressively towards sustained exaflops for real-world applications, perhaps as early as 2018. But with predictions of hundreds of megawatts required through extensions of conventional methods, what such systems will look like is far from certain, let alone how they will be programmed.

Driving the field of HPC towards new capabilities are the underlying technologies and the processor designs constructed from them. Intel, IBM, and AMD are all advancing their processor designs. 45 and 32 nanometer technologies are taking hold even as the number of cores per die and socket increases to deliver a continuing increase in performance.

Intel’s Xeon E7-8870 processor integrates 10 cores operating at 2.4 GHz, with a 30 MB cache and support for 2 terabytes of DDR3 memory. Built with hafnium-based high-k metal gate silicon technology, the Intel chip burns 130 watts.

Cooler is the 12-core 2.5 GHz AMD Opteron 6100, which draws 105 watts and is built on a 45 nanometer silicon-on-insulator process. AMD plans to move to 16 cores at 32 nanometers by Q3 of this year, while Intel is preparing its 22 nanometer Ivy Bridge processors based on 3-D Tri-Gate transistors.

IBM’s heavy hitter continues to be the Power family, with the 45 nanometer Power7 out last year in chip configurations of between 4 and 8 cores. This will serve as the central component of the 10 petaflops Blue Waters machine to be deployed next year. Its successor, the IBM Power8, is currently under development.

GPU designs continue to push the edge of the envelope in peak performance while enhancing their generality for greater utility. The NVIDIA Tesla 20-series family, based on the Fermi architecture, integrates up to 512 CUDA cores with clock rates of between 1.15 and 1.4 GHz and delivers more than half a teraflop of double-precision performance. The AMD FireStream 9370 GPU, based on the Cypress architecture, offers comparable performance. Both vendors are moving towards tighter system integration, with AMD pushing its Fusion system architecture. In the software domain, it’s a head-to-head fight between CUDA and OpenCL, with strong advocates for each.
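
The half-teraflop double-precision figure follows from the core count and clock alone, assuming (as on the Fermi-based Tesla parts) one fused multiply-add per CUDA core per cycle, double precision at half the single-precision rate, and the roughly 1.3 GHz clock of the top bin:

```latex
\[
512\ \text{cores} \times 1.3\,\mathrm{GHz}
  \times 2\,\tfrac{\text{flops}}{\text{FMA}}
  \times \tfrac{1}{2}\ (\text{DP rate})
  \approx 665\ \mathrm{GFLOPS}\ \text{(double precision)}.
\]
```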

The underlying technologies are certainly not standing still. Recent graphene technology breakthroughs include UCLA reporting 300 GHz switching rates and UC Berkeley announcing new optical modulators, while IBM has implemented the first integrated circuit based on graphene transistors. 3-D stacking of dies by IBM, Xilinx, and other manufacturers is preparing HPC for higher density packaging with higher internal bandwidths and shorter latencies, while combining disparate functional components (e.g., cores, DRAM) into single integrated units.

Every year an attempt is made to capture a more meaningful representation of supercomputing based on the TOP500. The list provides extensive data but is usually discussed only in terms of the highest rated machine, the lowest rated machine, and the sum of all 500 machines. But what about supercomputing for the common man: the mainstream form and capability? This year, although the top machines exhibit unique properties, the canonical system is the standard Linux commodity cluster with a peak performance of 72.4 teraflops and a Linpack rating of 38.3 teraflops. Such a system incorporates Intel Xeon Nehalem-EP processors, is integrated by IBM (HP is a close second), and is interconnected with Gigabit Ethernet (InfiniBand has almost caught up). The system comprises 1,134 sockets of 6 cores each and burns 200 kilowatts. The closest machine to this profile is number 288 on the TOP500 list.
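
Those headline numbers are mutually consistent. Assuming (our assumption, since the list entry itself does not say) a 2.66 GHz clock and 4 double-precision flops per core per cycle for these six-core Xeon parts:

```latex
\[
1{,}134\ \text{sockets} \times 6\ \tfrac{\text{cores}}{\text{socket}} = 6{,}804\ \text{cores},
\qquad
6{,}804 \times 2.66\,\mathrm{GHz} \times 4\,\tfrac{\text{flops}}{\text{cycle}}
  \approx 72.4\ \mathrm{TFLOPS},
\]
\[
\text{Linpack efficiency} = \frac{38.3}{72.4} \approx 53\%,
\qquad
\frac{38.3\ \mathrm{TFLOPS}}{200\ \mathrm{kW}} \approx 190\ \mathrm{MFLOPS/W}.
\]
```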

Even though we still rate systems in teraflops, the Graph 500 list is emerging to represent a very different class of computing: data-intensive processing, a domain in which the manipulation of metadata dominates in place of floating-point operations. Although yet to dominate, this emerging class of computing is important for many sparse problems, as well as for knowledge management and understanding problems that are expected to have increasing impact on the field of HPC.
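
The Graph 500 kernel is a breadth-first search over a very large graph, scored in traversed edges per second rather than flops. The minimal sketch below (ours, written in CUDA to match the earlier example, over an assumed compressed-sparse-row graph) runs a naive level-synchronous BFS; every operation is index arithmetic and an irregular memory access, with no floating point at all.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One level of a naive level-synchronous BFS over a CSR graph.
// All the work is integer/index manipulation and irregular loads.
__global__ void bfs_level(const int *row_ptr, const int *col_idx,
                          int *dist, int level, int n, int *changed)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= n || dist[v] != level) return;        // only vertices on the current frontier
    for (int e = row_ptr[v]; e < row_ptr[v + 1]; ++e) {
        int u = col_idx[e];
        if (dist[u] < 0) {                         // unvisited neighbor
            dist[u] = level + 1;                   // benign race: all writers store the same value
            *changed = 1;
        }
    }
}

int main()
{
    // Tiny example graph in CSR form: edges 0-1, 0-2, 1-3, 2-3 (undirected).
    const int n = 4;
    int h_row[]  = {0, 2, 4, 6, 8};
    int h_col[]  = {1, 2, 0, 3, 0, 3, 1, 2};
    int h_dist[] = {0, -1, -1, -1};                // start BFS from vertex 0

    int *d_row, *d_col, *d_dist, *d_changed;
    cudaMalloc(&d_row, sizeof(h_row));   cudaMalloc(&d_col, sizeof(h_col));
    cudaMalloc(&d_dist, sizeof(h_dist)); cudaMalloc(&d_changed, sizeof(int));
    cudaMemcpy(d_row, h_row, sizeof(h_row), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col, h_col, sizeof(h_col), cudaMemcpyHostToDevice);
    cudaMemcpy(d_dist, h_dist, sizeof(h_dist), cudaMemcpyHostToDevice);

    // Expand one frontier level per iteration until no new vertices are found.
    for (int level = 0, changed = 1; changed; ++level) {
        changed = 0;
        cudaMemcpy(d_changed, &changed, sizeof(int), cudaMemcpyHostToDevice);
        bfs_level<<<1, 64>>>(d_row, d_col, d_dist, level, n, d_changed);
        cudaMemcpy(&changed, d_changed, sizeof(int), cudaMemcpyDeviceToHost);
    }

    cudaMemcpy(h_dist, d_dist, sizeof(h_dist), cudaMemcpyDeviceToHost);
    for (int v = 0; v < n; ++v) printf("dist[%d] = %d\n", v, h_dist[v]);  // expect 0 1 1 2
    cudaFree(d_row); cudaFree(d_col); cudaFree(d_dist); cudaFree(d_changed);
    return 0;
}
```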
