IEEE Conference Keynoters Lay Out Path to Exascale Computing

By Aaron Dubrow

October 5, 2011

The challenges of exascale computing were the main focus of the three keynote addresses at the IEEE Cluster 2011 conference hosted in Austin, Texas from September 26 through 30. The speakers, renowned leaders in cluster computing, described the obstacles and opportunities involved in building systems one thousand times more powerful than today’s petascale supercomputers. Speaking from the perspective of the software developer (Thomas Sterling), the cluster designer (Liu GuangMing) and the chip architect (Charles Moore), each presented their thoughts on what is needed to reach exascale.

Thomas Sterling, Indiana University, Center for Research in Extreme Scale Technologies (CREST)

With a confidence born from long experience, Thomas Sterling, father of Beowulf, industry veteran, and associate director of the Center for Research in Extreme Scale Technologies (CREST) at Indiana University, kicked off the conference on Tuesday with a keynote on the need for a new programming paradigm, one that is adaptive, intelligent, asynchronous, and able to deliver significantly better performance than today’s execution model.

Before jumping into an explanation of the new programming model, Sterling presented an eccentric history of cluster computing, from the MIT Whirlwind project in the 1950s to Norbert Wiener’s cybernetic systems, through the Beowulf era, when commodity PCs were first harnessed together to build a powerful cluster, to today’s petaflop mega-machines, one million times faster than the first Beowulf cluster.

Throughout the various phases of supercomputing innovation, several different programming paradigms have emerged, Sterling explained, from serial execution to vector processing to SIMD, to today’s dominant model, which uses MPI (Message Passing Interface) to communicate among many cores.

“Clusters will go through another metamorphosis,” Sterling predicted, adding, “commodity clusters will survive paradigm shifts.”

Current trends suggest the trajectory for computing speed is leveling off. Sterling identified a number of problems that may prevent technologists from developing larger systems. Power and reliability will be challenging, but Sterling sees the programming model as the biggest obstacle.

In the synchronous model represented by MPI, calculations need to be performed in a specific order, and with precision, to minimize latency, a dance that is difficult to keep up with. Only a handful of codes can run on the hundreds of thousands of cores available on today’s large supercomputers. Exascale computers, which Sterling said he hopes to see by the end of the decade, will likely have millions of cores. At that core count, component failures and synchronization costs make the usual data-parallel computing approach untenable.
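
To make the synchronization cost concrete, here is a minimal sketch, not taken from Sterling's talk, of the bulk-synchronous MPI pattern he was criticizing; the toy workload is illustrative only.

```cpp
// Minimal bulk-synchronous MPI sketch (illustrative): every rank computes,
// then the whole machine meets at a collective. At very large core counts,
// the slowest (or failed) rank sets the pace for everyone else.
// Build with an MPI compiler wrapper, e.g. `mpicxx lockstep.cpp`.
#include <mpi.h>
#include <cmath>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = 0.0;
    for (int i = 0; i < 1000000; ++i)        // local work; in practice it is
        local += std::sin(rank + i * 1e-6);  // never perfectly load-balanced

    double global = 0.0;
    // Implicit global synchronization: no rank proceeds past this line until
    // every other rank has arrived, so one straggler idles the entire machine.
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0) std::printf("global sum = %f across %d ranks\n", global, size);
    MPI_Finalize();
    return 0;
}
```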

“We must manage asynchrony to allow computing to be self-adaptive,” he said.

As an analogy, he pointed to the difference between a guided missile and a cannon. MPI represents an uncontrolled, ballistic, brute-force method to solve problems. The new paradigm, or “experimental execution model,” presented by Sterling is exemplified by his own group’s ParalleX project.

“ParalleX is an abstract test bed to explore the synthesis of ideas for current and extreme scale applications,” Sterling said. “We want to bring strong scaled applications back into the cluster world.”

His software employs micro-checkpointing, an ephemeral form of fault detection and correction performed on the fly, and introspection (a kind of machine learning) that closes the loop, as in cybernetics, constantly adjusting course like the guided missile. It also manages asynchrony through “constraint-based synchronization.”

“You don’t want to tell the program when to do the tasks,” Sterling said. “You want to tell the program the conditions under which the task can be done. This allows the program to decide on its own when to undertake a given task.”
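
As an illustration only, and not ParalleX or its runtime, the sketch below expresses that idea in ordinary C++: a consumer task declares its preconditions (which inputs must be ready) rather than its place in a fixed schedule. The helper when_both_ready is hypothetical, invented here for the example.

```cpp
// Constraint-based launching, sketched with standard futures: the task runs
// when its inputs are satisfied, and unrelated tasks are never forced to wait
// at a global barrier. Requires C++14.
#include <future>
#include <iostream>

// Hypothetical helper: run `work` as soon as both inputs become available.
template <typename T, typename F>
std::future<T> when_both_ready(std::future<T> a, std::future<T> b, F work) {
    return std::async(std::launch::async,
                      [a = std::move(a), b = std::move(b), work]() mutable {
                          T left = a.get();   // constraint: left operand ready
                          T right = b.get();  // constraint: right operand ready
                          return work(left, right);
                      });
}

int main() {
    // Two producers with very different costs; in a bulk-synchronous model the
    // fast one would sit at a barrier waiting for the slow one.
    std::future<double> fast = std::async([] { return 2.0; });
    std::future<double> slow = std::async([] {
        double s = 0.0;                          // simulate an expensive subdomain
        for (int i = 1; i <= 10000000; ++i) s += 1.0 / i;
        return s;
    });

    // The consumer states only its precondition (both inputs ready),
    // not *when* to run relative to other tasks.
    std::future<double> sum = when_both_ready(std::move(fast), std::move(slow),
                                              [](double x, double y) { return x + y; });

    std::cout << "result = " << sum.get() << "\n";
    return 0;
}
```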

He pointed to initial performance gains from porting an adaptive mesh refinement algorithm for astrophysics to the ParalleX execution model. Results showed a two- to three-fold improvement in performance from changing the underlying runtime from MPI to ParalleX.

Some of these same goals are being pursued in a few significant, but not particularly well-known, programming experiments, according to Sterling. In addition to ParalleX, he discussed the StarSs project at the Barcelona Supercomputing Center, which employs a dataflow execution model, and the SWift Adaptive Runtime Machine (SWARM) from ET International.

These execution models may not yet provide optimal computing, Sterling admitted, but the solutions being developed are needed for the community to advance.

“Cluster computing is going through a phase transition,” he asserted. “It will take leadership in this new paradigm shift and it will be the medium where a new paradigm is manifested.”

The tools are open source and XPI, the API for the execution environment, is in alpha testing and available to friendly users. It will be released soon to the general public.

Liu GuangMing, Director, National Supercomputer Center, Tianjin, China

Liu GuangMing, the designer of Tianhe-1A — China’s most powerful supercomputer and the second most powerful in the world — began his Wednesday keynote with an overview of the system deployed at the National Supercomputer Center in Tianjin, China.  He followed with an analysis of the barriers that designers face in building an exascale system.

Built from 14,336 Intel CPUs, 7,168 NVIDIA GPUs, and 2,048 Galaxy FT-1000 eight-core processors designed by Liu himself, Tianhe-1A delivers a Linpack performance of 2.57 petaflops. The hybrid cluster is composed largely of commodity parts; however, a few of the components, including the interconnects and the FT chips, are proprietary.

“To get to the petascale, you can choose a traditional design or a new design,” Liu said. “We have been looking for a new way to design and implement a petaflop supercomputer.”

When it was deployed in 2010, many in the HPC world questioned Tianhe-1A’s ability to run scientific applications efficiently. Liu described a broad range of problems that used thousands to hundreds of thousands of processor cores with great efficiency, from seismic imaging for petroleum exploration to decoding the genome of the E. coli bacterium that sickened thousands in Germany. These results put to rest some of the questions about Tianhe-1A’s usability.

After describing the technological and scientific successes of Tianhe-1A, Liu transitioned to a discussion of the problems associated with future exascale systems. He divided the problems into five categories: power, memory, communication, reliability, and application scalability, and quantified each problem with mathematical models.

Literally.

Transforming each of the main challenges into equations, he described how the models depict the obstacles facing continued speedups. The goal of this endeavor was to “build a synthesized speedup model and define quantitatively the ‘walls’,” Liu said.
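
Liu’s actual equations were not reproduced in his talk summary, but a hedged illustration of what a synthesized speedup model with quantified walls can look like is an Amdahl-style expression extended with memory, communication, and reliability terms; the symbols below are assumptions for illustration, not Liu’s notation.

```latex
% Illustrative sketch only; the symbols are assumed, not Liu's notation.
%   S(p)      speedup on p processors
%   f         parallelizable fraction of the work
%   T_1       single-processor compute time
%   T_mem(p), T_comm(p), T_ckpt(p)  memory-access, communication, and
%             checkpoint/recovery overheads as functions of scale
\begin{equation}
  S(p) = \frac{T_1}{(1-f)\,T_1 + \dfrac{f\,T_1}{p}
               + T_{\mathrm{mem}}(p) + T_{\mathrm{comm}}(p) + T_{\mathrm{ckpt}}(p)}
\end{equation}
% Each "wall" appears as a denominator term that stops shrinking (or grows)
% with p: the memory wall in T_mem, the communication wall in T_comm, the
% reliability wall in T_ckpt, with the power wall acting as a hard limit on p.
```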

He went on to suggest potential ways over each wall, sometimes through concerted effort by the HPC community, sometimes through emerging innovations.

Liu also showed enthusiasm for untested, emerging technologies such as optical or wireless interconnects, nanoelectronics and quantum and DNA computing, all of which he expects to play a role in the evolution of new systems. He pointed to the high-speed 3D interconnects associated with the Cray XT5 and Fujitsu K computer systems as examples of current technologies that he believes are on the right path to reaching the exascale.

Liu also gave examples of instances where the community must do a better job of optimizing applications for larger systems. Speaking about computer memory, he classified six types of data access that must be considered when speeding up and scaling up applications to tens of thousands of cores.

“Traditional optimization techniques usually consider only some of these characteristics,” Liu said. “We must consider all six characteristics and create a harmonious optimization algorithm.”

This holistic, deep thinking about the interrelationship of the various levels of computation was the main message of Liu’s presentation. He repeatedly returned to graphs showing the impact of various processes, from memory access and communication to power consumption and cost, on the overall time and efficiency of computation.

“To reach the exascale, we must research solutions at all system levels,” Liu concluded.

Charles Moore, Corporate Fellow and Technology Group CTO, Advanced Micro Devices

Reaching exascale was the subtext of Charles Moore’s Thursday keynote at IEEE Cluster 2011, but AMD’s emerging line of accelerated processing units (APUs) was the real subject of his talk.

APUs are a class of chip that Moore believes will power future exascale systems. According to Moore, exascale systems will achieve their massive speedup by using both CPUs and GPUs or other accelerators.

“We are approaching what we at AMD call the heterogeneous systems era,” Moore said. That alone is not groundbreaking; what is important is that, for AMD, these cores will all be located on the same chip.

Among the chips discussed by Moore was the “Brazos” E-series Fusion APU, which contains dual cores, dual GPUs, and a video accelerator on a single chip. It achieves 90 gigaflops of single-precision performance within an 18-watt TDP. “Desna,” Brazos’ little cousin, runs on only 6 watts and is suitable for passively cooled designs like tablets. “Llano,” AMD’s higher-end chip, will have four CPU cores and more advanced GPUs, and will offer 500 gigaflops of compute power per node.

One advantage of AMD’s new line is that you “can use this chip for graphics or as a compute offload or both at the same time,” Moore said.

The powerful chips that Moore prophesied won’t quite take us to the exascale, but they will get us most of the way, he said. For exascale, an overhaul of the memory architecture and programming models is needed.

Moore alluded to 3D stacked memory being developed by AMD as a possible technological solution to memory access problems. He also described the new AMD Fusion system architecture, where the goal is “making the GPU a first class citizen in the system architecture.”

The Fusion system architecture itself is “agnostic for CPU and GPU.”  “We’ll add other accelerators to this frame in the future,” Moore said. “It’s not just about GPUs, it’s about heterogeneous computing in general.”

Openness was a common theme in the last part of Moore’s talk where he described AMD’s long-standing dedication to open source software and standards. He discussed emerging standards including HyperShare, the Open Compute Project, and the Common Communication Interface, which he believes will play key roles in getting to exascale.

“Open standards are the basis for large ecosystems,” he said. “If you look over time, open standards always win.”

Looking beyond the next generation of chips, Moore described the potential for an “awesome exascale-class” 10-teraflop x86 APU computing node feasible in the 2018 timeframe.

“We intend to make the unprecedented processing capability of the APU as accessible to programmers as the CPU is today.”
