Practicalities and Challenges in the Petaflops Era

By Thomas Sterling and Chirag Dekate

June 22, 2011

Every year at ISC we stop and look back at the field of HPC, which has consistently exhibited the greatest rate of change of any technology in the history of mankind. This year is particularly important as the conventional methods that have served well over the last two decades are in direct contention with the technology trends pushing us towards a new future. This is best highlighted in the context of petaflops-capable supercomputers that have become the new standard at the top end of HPC and the reemergence of Asia as a dominant player in that ethereal regime.

But what has defined this year and distinguished it from the recent past is that although the issues are clear, the future conclusions are not. Perhaps that is the lesson: we are in a rare state of transition, the outcome of which is yet to be determined. And the debate is anything but over. Let's consider the highlights.

Petaflop computing is now the norm worldwide with the US, Europe, and Asia all driving computation beyond 10^15 flops. Most notable was China with its deployment of Tianhe-1A exceeding 2.5 petaflops (Linpack), assuming the position of the "fastest computer in the world" in 2010. That system has now been surpassed by the 8 petaflops K Computer from Japan, giving that country the top spot for the first time since the illustrious Earth Simulator.

Asia also has significant deployment of more traditional HPC systems, providing the means for strong programs in computational science with potential long-term impact on future science and engineering disciplines. Finally, an increasing share of the integrated components in Asia is homegrown, indicating a likely future with fully native HPC systems.

This year the big debate is the future of HPC system architecture: homogeneous multicore/manycore or heterogeneous GPU-based structures. And in both cases, the issue of programming dominates. GPUs are perceived by many as the fast track to superior computing, and for some applications this has been demonstrated. Indeed, of the top four machines, three incorporate GPUs. That would suggest a clear trend. But not so fast. Of the top 500 systems, only 17 integrate GPUs as an essential element in achieving their performance goals. That would also suggest a clear trend, but in the opposite direction.

GPUs bring enormous combined floating-point capability in a relatively small package and at a superior power/performance envelope. The numbers are staggering, but at a cost. Sitting at the wrong end of a PCIe bus, GPUs suffer long latencies and relatively low bandwidth, demanding very high data reuse and highly regular control flow to extract anything near their peak potential. And with program control residing in the general-purpose processors, the programming methods for such hybrid systems are not for the faint of heart, nor are they consistent with the mass of legacy codes on which industry, science, and governments all rely and in which they have invested.
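
To make the data-reuse point concrete, below is a minimal CUDA sketch, not drawn from any particular code mentioned here; the kernel, its name, and its parameters are purely illustrative. It shows the pattern such systems reward: pay the PCIe transfer cost once, keep the data resident in GPU memory while many regular operations are applied to it, and copy the result back once.

```cuda
// Hypothetical sketch (not from the article): why device-resident data and
// high reuse matter when the accelerator sits across a PCIe link.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// A deliberately regular kernel: each thread updates its own element many
// times, so every byte moved over PCIe is reused 'iters' times on the device.
__global__ void saxpy_repeat(float *x, const float *y, float a, int n, int iters)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float xi = x[i];
        for (int k = 0; k < iters; ++k)   // on-device reuse amortizes the copies
            xi = a * xi + y[i];
        x[i] = xi;
    }
}

int main()
{
    const int n = 1 << 24;                // ~16M elements, ~64 MB per array
    const int iters = 1000;               // reuse factor; near 1, PCIe dominates
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);

    // One transfer each way; all arithmetic happens on data that stays
    // resident in GPU memory in between.
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy_repeat<<<blocks, threads>>>(dx, dy, 0.5f, n, iters);
    cudaDeviceSynchronize();

    cudaMemcpy(hx, dx, bytes, cudaMemcpyDeviceToHost);
    printf("hx[0] = %f\n", hx[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

When the reuse factor falls toward one, or the control flow diverges across threads, the two transfers and the idle bus dominate and sustained performance falls far below the advertised peak.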

Thus, it is possible that architectures such as TSUBAME 2.0 are transitional, in that they represent the beginnings of an empirical search that in a few years will resolve into a distinctly different system architecture, one exploiting the best of both manycore and GPUs in a balanced and well-integrated structure managed by a unified programming methodology. While many practitioners experiment, sometimes to good effect, with CUDA and the emerging OpenCL framework, many more codes and programmers remain wedded to means that are more productive day to day.

These are very exciting times but those who think they know the final answer are probably fooling themselves, if not the rest of us. After all, the new number one K supercomputer is not based on GPUs but is 3 times faster than the number two Tianhe-1A machine, which is.

The steady increase in delivered performance is also pushing the power envelope. One advantage of GPUs, when employed effectively, is somewhat improved energy efficiency (joules per operation). But while clock rates remain relatively stable (although differing across a range of approximately 3X), the scale of the largest systems continues to grow as HPC approaches another milestone: a million cores. The tradeoff is complex, but grave concerns are warranted as the biggest machines top 10 megawatts.
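
As a back-of-the-envelope illustration using the round numbers above, roughly 10 megawatts for a machine delivering on the order of 8 petaflops sustained (approximately the K Computer's Linpack figure), the energy budget per operation is about

\[
\frac{10 \times 10^{6}\ \text{W}}{8 \times 10^{15}\ \text{flop/s}} \approx 1.25\ \text{nanojoules per operation.}
\]

Scaled naively, the same efficiency applied to an exaflop would demand on the order of a gigawatt, which is why energy dominates the exascale discussion below.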

Power is the principal constraint for ambitious projects aiming to deliver sustained exaflops performance before the end of this decade. The International Exascale Software Project has worldwide representation and is coordinating the development of a new software platform to support the management and application of exascale systems in the next decade. Recognizing the long lead times for software and its correspondingly near-prohibitive costs, combining investments in mutually aligned directions would appear to be an essential strategy for achieving billion-way parallelism.
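
Where the billion-way figure comes from is itself a one-line estimate; assuming clock rates stay near a gigahertz rather than rising, an exaflop machine must keep on the order of

\[
\frac{10^{18}\ \text{flop/s}}{\left(\sim 10^{9}\ \text{cycles/s}\right)\times\left(\text{a few flops per core per cycle}\right)} \sim 10^{8}\text{--}10^{9}
\]

operations in flight at every moment, before counting the extra concurrency needed to hide memory and communication latencies.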

In the US, the DARPA-sponsored UHPC program, while not expressly targeting exascale systems, was initiated this year to develop technologies suitable for a petaflop in a rack at under 60 kilowatts. The European Exascale Software Initiative is developing a roadmap to exaflops, and, also in Europe, Intel and, separately, Cray are engaged in collaborations with European researchers to drive towards exaflops. In Asia, both Japan and China have programs intended to move aggressively towards sustained exaflops for real-world applications, perhaps as early as 2018. But with predictions of hundreds of megawatts required through extensions of conventional methods, what such systems will look like is far from certain, let alone how they will be programmed.
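
The UHPC target itself frames the gap; as a rough extrapolation (ours, not a program projection), a petaflop in under 60 kilowatts corresponds to

\[
\frac{10^{15}\ \text{flop/s}}{6 \times 10^{4}\ \text{W}} \approx 17\ \text{Gflops per watt},
\]

and even that aggressive efficiency, scaled linearly to an exaflop, still lands near 60 megawatts, well below the hundreds of megawatts feared for conventional approaches but far above today's 10 megawatt comfort level.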

Driving the field of HPC towards new capabilities are the underlying technologies and the processor designs from which systems are constructed. Intel, IBM, and AMD are all advancing their processor designs. 45 and 32 nanometer technologies are taking hold even as the number of cores per die and socket increases, delivering continuing gains in performance.

Intel's Xeon E7-8870 processor integrates 10 cores operating at 2.7 GHz, with 30 MB of cache and support for 2 terabytes of DDR3 memory. Built with hafnium-based high-k metal gate silicon technology, the Intel chip burns 130 watts.

Cooler is the 12-core, 2.5 GHz AMD Opteron 6100 component at 45 nanometers, drawing 105 watts. AMD plans to move to 16 cores at 32 nanometers by Q3 of this year, while Intel is preparing its 22 nanometer Ivy Bridge processors based on 3-D Tri-Gate transistors.

IBM's heavy hitter continues to be the Power family, with the 45 nanometer Power7 out last year in chip configurations of between 4 and 8 cores. This will serve as the central component of the 10 petaflops Blue Waters machine to be deployed next year. Its successor, the IBM Power8, is currently under development.

GPU designs continue to push the edge of the envelope in peak performance while enhancing their generality for greater utility. The NVIDIA Tesla 20-series family, based on the Fermi architecture, integrates up to 512 CUDA cores with clock rates of between 1.15 and 1.4 GHz and delivers more than half a teraflop of double-precision performance. Offering comparable performance is the AMD FireStream 9370 GPU, based on the Cypress architecture. Both vendors are moving towards tighter system integration, with AMD pushing its Fusion system architecture. In the software domain, it's a head-to-head fight between CUDA and OpenCL, with strong advocates for each.
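
The half-teraflop claim follows directly from the quoted configuration. Taking the 1.3 GHz point in the stated clock range and assuming the Fermi-generation Tesla's double-precision rate of one fused multiply-add per core every other cycle (a property of the 20-series parts, not stated above), peak double-precision throughput works out to roughly

\[
512\ \text{cores} \times 1.3\ \text{GHz} \times 2\ \frac{\text{flops}}{\text{FMA}} \times \frac{1}{2} \approx 665\ \text{Gflops.}
\]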

The underlying technologies are certainly not standing still. Recent graphene technology breakthroughs include UCLA reporting 300 GHz switching rates and UC Berkeley announcing new optical modulators, while IBM has implemented the first integrated circuit based on graphene transistors. 3-D stacking of dies by IBM, Xilinx, and other manufacturers is preparing HPC for higher density packaging with higher internal bandwidths and shorter latencies, while combining disparate functional components (e.g., cores, DRAM) into single integrated units.

Every year an attempt is made to capture a more meaningful representation of supercomputing based on the TOP500. The list provides extensive data but is usually discussed only in terms of the highest-rated machine, the lowest-rated machine, and the sum of all 500 machines. But what about supercomputing for the common man: the mainstream form and capability? This year, although the top machines exhibit unique properties, the canonical system is the standard Linux commodity cluster with a peak performance of 72.4 teraflops and a Linpack rating of 38.3 teraflops. Such a system incorporates Intel Xeon Nehalem-EP processors, is integrated by IBM (HP is a close second), and is interconnected with Gigabit Ethernet (InfiniBand has almost caught up). The system comprises 1,134 sockets of 6 cores each and burns 200 kilowatts. The closest machine to this profile is number 288 on the TOP500 list.
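
Those canonical numbers hang together. Assuming the four double-precision flops per core per cycle typical of this processor generation and a clock near 2.66 GHz (our assumption; the list reports only aggregates), the peak works out to

\[
1{,}134\ \text{sockets} \times 6\ \text{cores} \times 2.66\ \text{GHz} \times 4\ \frac{\text{flops}}{\text{cycle}} \approx 72.4\ \text{teraflops},
\]

and the Linpack-to-peak ratio of 38.3/72.4, about 53 percent, is characteristic of Gigabit Ethernet clusters; comparable InfiniBand-connected systems typically run well above it.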

Even though we still rate systems in teraflops, the Graph 500 list is emerging to represent a very different class of computing: data-intensive processing, a domain in which the manipulation of metadata dominates in place of floating-point operations. Although it has yet to dominate, this emerging class of computing is important for many sparse problems, as well as for knowledge management and understanding problems that are expected to have increasing impact on the field of HPC.
