IEEE Conference Keynoters Lay Out Path to Exascale Computing

By Aaron Dubrow

October 5, 2011

The challenges of exascale computing were the main focus of the three keynote addresses at the IEEE Cluster 2011 conference, held in Austin, Texas, from September 26 through 30. The speakers, renowned leaders in cluster computing, described the obstacles and opportunities involved in building systems one thousand times more powerful than today’s petascale supercomputers. Speaking from the perspectives of the software developer (Thomas Sterling), the cluster designer (Liu GuangMing), and the chip architect (Charles Moore), the keynoters each presented their thoughts on what is needed to reach exascale.

Thomas Sterling, Indiana University, Center for Research in Extreme Scale Technologies (CREST)

With a confidence born of long experience, Thomas Sterling, father of Beowulf, industry veteran, and associate director of the Center for Research in Extreme Scale Technologies (CREST) at Indiana University, kicked off the conference on Tuesday with a keynote on the need for a new programming paradigm that is adaptive, intelligent, asynchronous, and able to deliver significantly better performance than today’s execution model.

Before jumping into an explanation of the new programming model, Sterling presented an eccentric history of cluster computing, from the MIT Whirlwind project of the 1950s and Norbert Wiener’s cybernetic systems, through the Beowulf era, when commodity PCs were first harnessed together into powerful clusters, to today’s petaflop mega-machines, one million times faster than the first Beowulf cluster.

Throughout the various phases of supercomputing innovation, several different programming paradigms have emerged, Sterling explained, from serial execution to vector processing to SIMD, to today’s dominant model, which uses MPI (Message Passing Interface) to communicate among many cores.

“Clusters will go through another metamorphosis,” Sterling predicted, adding, “commodity clusters will survive paradigm shifts.”

Current trends suggest that the trajectory of computing speed is leveling off. Sterling identified a number of problems that could prevent technologists from building much larger systems. Power and reliability will be challenging, but Sterling sees the programming model as the biggest obstacle.

In the synchronous model represented by MPI, calculations must be performed in a specific order, and with precise timing, to minimize latency, a dance that becomes increasingly difficult to sustain. Only a handful of codes can run across the hundreds of thousands of cores available on today’s largest supercomputers. Exascale computers, which Sterling said he hopes to see by the end of the decade, will likely have millions of cores. At that scale, component failures and synchronization costs make the usual data-parallel computing approach untenable.
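
To make the cost concrete, consider a minimal bulk-synchronous MPI loop (an illustrative sketch, not code from Sterling’s talk): every rank must reach the same collective call before any rank can proceed, so the slowest, or a failed, process stalls the entire machine, and the penalty grows with the number of cores.

    // Illustrative sketch of the bulk-synchronous MPI pattern Sterling critiques.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        std::vector<double> local(1000, rank + 1.0);
        for (int step = 0; step < 100; ++step) {
            double local_sum = 0.0;
            for (double x : local) local_sum += x;   // independent local work

            // Global synchronization point: every rank blocks here until the
            // slowest one arrives -- every step, on every core in the machine.
            double global_sum = 0.0;
            MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                          MPI_COMM_WORLD);

            for (double &x : local) x /= global_sum; // proceed in lockstep
        }
        MPI_Finalize();
        return 0;
    }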

“We must manage asynchrony to allow computing to be self-adaptive,” he said.

As an analogy, he pointed to the difference between a guided missile and a cannon. MPI represents an uncontrolled, ballistic, brute-force method of solving problems. The new paradigm, or “experimental execution model,” that Sterling presented is exemplified by the work of his own ParalleX research group.

“ParalleX is an abstract test bed to explore the synthesis of ideas for current and extreme scale applications,” Sterling said. “We want to bring strong scaled applications back into the cluster world.”

His software employs micro-checkpointing (ephemeral fault detection and correction on the fly) and introspection (a kind of machine learning) that closes the loop, as in cybernetics, so the system constantly adjusts like the guided missile. It also manages asynchrony through “constraint-based synchronization.”

“You don’t want to tell the program when to do the tasks,” Sterling said. “You want to tell the program the conditions under which the task can be done. This allows the program to decide on its own when to undertake a given task.”
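The idea can be illustrated with standard C++ futures (a minimal sketch of the concept, not the ParalleX or XPI API): each task declares the inputs it depends on, and the runtime decides when to run it, rather than the programmer prescribing a global order.

    // Illustrative sketch only: constraint-based synchronization expressed with
    // standard C++ futures, not with ParalleX or XPI. The task names below are
    // hypothetical placeholders.
    #include <future>
    #include <iostream>
    #include <utility>

    double load_mesh_block()  { return 42.0; }   // hypothetical producer tasks
    double compute_boundary() { return 3.14; }

    int main() {
        // Launch both producers; they may run in any order, on any thread.
        std::future<double> block    = std::async(std::launch::async, load_mesh_block);
        std::future<double> boundary = std::async(std::launch::async, compute_boundary);

        // The consumer is constrained only by its inputs being ready: it runs
        // when both futures are satisfied, not at a point fixed in advance.
        auto refine = std::async(std::launch::async,
            [b = std::move(block), e = std::move(boundary)]() mutable {
                return b.get() + e.get();        // blocks only on its own constraints
            });

        std::cout << "refined value: " << refine.get() << "\n";
        return 0;
    }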

He pointed to initial performance gains from porting an adaptive mesh refinement algorithm for astrophysics to the ParalleX execution model. Results showed a two- to three-fold improvement in performance from changing the underlying runtime from MPI to ParalleX.

Some of these same goals are being pursued in a few significant but not particularly well-known programming experiments, according to Sterling. In addition to ParalleX, he discussed examples from the StarSs project at the Barcelona Supercomputing Center, which employs a new model for dataflow execution, and the SWift Adaptive Runtime Machine (SWARM) from ET International.

These execution models may not yet provide optimal computing, Sterling admitted, but the solutions being developed are needed for the community to advance.

“Cluster computing is going through a phase transition,” he asserted. “It will take leadership in this new paradigm shift, and it will be the medium where the new paradigm is manifested.”

The tools are open source, and XPI, the API for the execution environment, is in alpha testing and available to friendly users. It will be released to the general public soon.

Liu GuangMing, Director, National Supercomputer Center, Tianjin, China

Liu GuangMing, the designer of Tianhe-1A — China’s most powerful supercomputer and the second most powerful in the world — began his Wednesday keynote with an overview of the system deployed at the National Supercomputer Center in Tianjin, China.  He followed with an analysis of the barriers that designers face in building an exascale system.

Built from 14,336 Intel CPUs, 7,168 NVIDIA GPUs, and 2,048 Galaxy FT-1000 eight-core processors designed by Liu himself, Tianhe-1A achieves 2.57 petaflops on the Linpack benchmark. The hybrid cluster is composed largely of commodity parts; however, a few of the components, including the interconnect and the FT chips, are proprietary.

“To get to the petascale, you can choose a traditional design or a new design,” Liu said. “We have been looking for a new way to design and implement a petaflop supercomputer.”

When it was deployed in 2010, many in the HPC world questioned Tianhe-1A’s ability to run scientific applications efficiently. Liu described a broad range of problems that have used thousands to hundreds of thousands of processors with high efficiency, from seismic imaging for petroleum exploration to decoding the genome of the E. coli strain that sickened thousands in Germany. Those results put to rest some of the questions about Tianhe-1A’s usability.

After describing the technological and scientific successes of Tianhe-1A, Liu turned to the problems associated with future exascale systems. He divided them into five categories (power, memory, communication, reliability, and application scalability) and quantified each with mathematical models.

Literally.

Transforming each of the main challenges into equations, he described how the models depict the obstacles facing continued speedups. The goal of this endeavor was to “build a synthesized speedup model and define quantitatively the ‘walls’,” Liu said.
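Liu’s own equations are not reproduced here, but the flavor of such a model can be sketched with an Amdahl-style speedup formula extended with overhead terms (a generic illustration, not Liu’s formulation):

    \[
    S(N) \;=\; \frac{T(1)}{\dfrac{T_{\mathrm{comp}}}{N} + T_{\mathrm{mem}}(N) + T_{\mathrm{comm}}(N) + T_{\mathrm{rel}}(N)}
    \]

Here T(1) is the single-processor time, T_comp/N is the ideally parallelized compute time, and the remaining terms are the per-step costs of memory access, communication, and reliability overhead such as checkpoint/restart. Whichever term stops shrinking (or starts growing) as N increases defines the corresponding “wall,” while the power budget caps the feasible N in the first place.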

He went on to suggest potential ways over each wall, sometimes through concerted effort by the HPC community, sometimes through emerging innovations.

Liu also showed enthusiasm for untested, emerging technologies such as optical and wireless interconnects, nanoelectronics, and quantum and DNA computing, all of which he expects to play a role in the evolution of new systems. He pointed to the high-speed 3D interconnects of the Cray XT5 and Fujitsu K computer systems as examples of current technologies that he believes are on the right path toward exascale.

Liu also gave examples of instances where the community must do a better job of optimizing applications for larger systems. Speaking about computer memory, he classified six types of data access that must be considered when speeding up and scaling up applications to tens of thousands of cores.

“Traditional optimization techniques usually consider only some of these characteristics,” Liu said. “We must consider all six characteristics and create a harmonious optimization algorithm.”

This holistic, deep thinking about the interrelationships among the various levels of computation was the main message of Liu’s presentation. He repeatedly returned to graphs showing the impact of various factors, from memory access and communication to power consumption and cost, on the overall time and efficiency of computation.

“To reach the exascale, we must research solutions at all system levels,” Liu concluded.

Charles Moore, Corporate Fellow and Technology Group CTO, Advanced Micro Devices

Reaching exascale was the subtext of Charles Moore’s Thursday keynote at IEEE Cluster 2011, but AMD’s emerging line of accelerated processing units (APUs) was the real subject of his talk.

APUs are a class of chip that Moore believes will power future exascale systems. According to Moore, exascale systems will achieve their massive speedup by using both CPUs and GPUs or other accelerators.

“We are approaching what we at AMD call the heterogeneous systems era,” Moore said. Mixing CPUs and GPUs is not itself groundbreaking; what matters is that, for AMD, these cores will all be located on the same chip.

Among the chips discussed by Moore was the “Brazos” E-Series Fusion APU, which packs dual CPU cores, a GPU, and a video accelerator onto a single chip. It achieves 90 gigaflops of single-precision performance at just 18 watts TDP. “Desna,” Brazos’ little cousin, runs on only 6 watts and is suitable for passively cooled designs like tablets. “Llano,” AMD’s higher-end chip, will have four CPU cores and more advanced graphics, and will offer 500 gigaflops of compute power per node.
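A back-of-the-envelope calculation (not one Moore presented, and ignoring the gap between single- and double-precision throughput) shows why such efficiency matters for exascale:

    \[
    \frac{90\ \text{GFLOPS}}{18\ \text{W}} = 5\ \text{GFLOPS/W}
    \qquad\Rightarrow\qquad
    \frac{10^{18}\ \text{FLOPS}}{5 \times 10^{9}\ \text{FLOPS/W}} = 2 \times 10^{8}\ \text{W} = 200\ \text{MW}
    \]

That is roughly an order of magnitude above the 20 MW power envelope widely cited as an exascale target at the time, underscoring that performance per watt, not just peak performance, drives the exascale roadmap.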

One advantage of AMD’s new line is that you “can use this chip for graphics or as a compute offload or both at the same time,” Moore said.

The powerful chips that Moore prophesied won’t quite take us to the exascale, but they will get us most of the way, he said. For exascale, an overhaul of the memory architecture and programming models is needed.

Moore alluded to 3D stacked memory being developed by AMD as a possible technological solution to memory access problems. He also described the new AMD Fusion system architecture, where the goal is “making the GPU a first class citizen in the system architecture.”

The Fusion system architecture itself is “agnostic for CPU and GPU.” “We’ll add other accelerators to this frame in the future,” Moore said. “It’s not just about GPUs, it’s about heterogeneous computing in general.”

Openness was a common theme in the last part of Moore’s talk where he described AMD’s long-standing dedication to open source software and standards. He discussed emerging standards including HyperShare, the Open Compute Project, and the Common Communication Interface, which he believes will play key roles in getting to exascale.

“Open standards are the basis for large ecosystems,” he said. “If you look over time, open standards always win.”

Looking beyond the next generation of chips, Moore described the potential for an “awesome exascale-class” 10-teraflop x86 APU computing node feasible in the 2018 timeframe.

“We intend to make the unprecedented processing capability of the APU as accessible to programmers as the CPU is today.”
