IEEE Conference Keynoters Lay Out Path to Exascale Computing

By Aaron Dubrow

October 5, 2011

The challenges of exascale computing were the main focus of the three keynote addresses at the IEEE Cluster 2011 conference, held in Austin, Texas, from September 26 through 30. The speakers, renowned leaders in cluster computing, described the obstacles and opportunities involved in building systems one thousand times more powerful than today’s petascale supercomputers. Speaking from the perspectives of the software developer (Thomas Sterling), the cluster designer (Liu GuangMing), and the chip architect (Charles Moore), the three presented their thoughts on what is needed to reach exascale.

Thomas Sterling, Indiana University, Center for Research in Extreme Scale Technologies (CREST)

With a confidence born of long experience, Thomas Sterling, father of Beowulf, industry veteran, and associate director of the Center for Research in Extreme Scale Technologies (CREST) at Indiana University, kicked off the conference on Tuesday with a keynote on the need for a new programming paradigm: one that is adaptive, intelligent, and asynchronous, and that can deliver significantly better performance than today’s execution model.

Before jumping into an explanation of the new programming model, Sterling presented an eccentric history of cluster computing, from the MIT Whirlwind project of the 1950s and Norbert Wiener’s cybernetic systems, through the Beowulf era, when commodity PCs were first harnessed together to build powerful clusters, to today’s petaflop mega-machines, one million times faster than the first Beowulf cluster.

Throughout the various phases of supercomputing innovation, several different programming paradigms have emerged, Sterling explained, from serial execution to vector processing to SIMD, to today’s dominant model, which uses MPI (Message Passing Interface) to communicate among many cores.

“Clusters will go through another metamorphosis,” Sterling predicted, adding, “commodity clusters will survive paradigm shifts.”

Current trends suggest the trajectory of computing speed is leveling off. Sterling identified a number of problems that may prevent technologists from developing much larger systems. Power and reliability will be challenging, but Sterling sees the programming model as the biggest obstacle.

In the synchronous model represented by MPI, calculations need to be performed in a specific order, and with precision, to minimize latency, a dance that is difficult to keep up with. Only a handful of codes can run on the hundreds of thousands of cores available on today’s largest supercomputers. Exascale computers, which Sterling said he hopes to see by the end of the decade, will likely have millions of cores. At that scale, component failures and synchronization costs can no longer be absorbed by the usual data-parallel computing approach.
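The reliability point is easy to quantify with illustrative numbers (these are not Sterling’s figures, just an order-of-magnitude sketch): if each of one million components fails, on average, once every five years, the machine as a whole suffers a failure roughly every 160 seconds, since five years is about 1.6 × 10^8 seconds and 1.6 × 10^8 ÷ 10^6 ≈ 160. A globally synchronous, checkpoint-and-restart style of execution struggles to make progress between interruptions arriving every couple of minutes.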

“We must manage asynchrony to allow computing to be self-adaptive,” he said.

As an analogy, he pointed to the difference between a guided missile and a cannon: MPI represents an uncontrolled, ballistic, brute-force way of solving problems. The new paradigm, or “experimental execution model,” that Sterling presented is exemplified by his own effort, the ParalleX Research Group.

“ParalleX is an abstract test bed to explore the synthesis of ideas for current and extreme scale applications,” Sterling said. “We want to bring strong scaled applications back into the cluster world.”

His software employs micro-checkpointing, ephemeral fault detection and correction on the fly, along with introspection (a kind of machine learning) that closes the loop, as in cybernetics, to adjust constantly like the guided missile. It also manages asynchrony through “constraint-based synchronization.”

“You don’t want to tell the program when to do the tasks,” Sterling said. “You want to tell the program the conditions under which the task can be done. This allows the program to decide on its own when to undertake a given task.”
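As a concrete illustration of that idea, here is a minimal sketch in standard C++ (this is not the ParalleX or XPI API, and the function names are invented for the example): each task is declared together with the inputs it needs, and execution is triggered by the availability of those inputs rather than by a programmer-scheduled barrier.

// Minimal sketch of constraint-based synchronization using standard C++
// futures. This is NOT ParalleX/XPI; it only illustrates launching work
// when its inputs are ready rather than at a fixed point in the schedule.
#include <future>
#include <iostream>
#include <vector>

int refine_block(int block_id) {
    // Stand-in for a unit of local work, e.g., refining one mesh block.
    return block_id * block_id;
}

int main() {
    // Declare the work; the runtime decides when each task actually runs.
    std::vector<std::future<int>> blocks;
    for (int id = 0; id < 4; ++id)
        blocks.push_back(std::async(std::launch::async, refine_block, id));

    // This reduction is constrained only by the availability of its inputs:
    // it waits on each future as needed instead of on a global barrier.
    int total = 0;
    for (auto &b : blocks)
        total += b.get();

    std::cout << "combined result: " << total << "\n";
    return 0;
}

ParalleX carries the idea much further, but the contrast with a fixed, programmer-ordered communication schedule is the point of the sketch.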

He pointed to initial performance gains from porting an adaptive mesh refinement algorithm for astrophysics to the ParalleX execution model: results showed a two- to three-fold improvement in performance when the underlying runtime was changed from MPI to ParalleX.

Some of these same goals are being pursued in a few significant, but not particularly well-known, programming experiments, according to Sterling. In addition to ParalleX, he discussed the StarSs project at the Barcelona Supercomputing Center, which employs a new model for dataflow execution, and the SWift Adaptive Runtime Machine (SWARM) from ET International.

These execution models may not yet provide optimal computing, Sterling admitted, but the solutions being developed are needed for the community to advance.

“Cluster computing is going through a phase transition,” he asserted. “It will take leadership in this new paradigm shift and it will be the medium where a new paradigm is manifested.”

The tools are open source, and XPI, the API for the execution environment, is in alpha testing and available to friendly users. It will be released to the general public soon.

Liu GuangMing, Director, National Supercomputer Center, Tianjin, China

Liu GuangMing, the designer of Tianhe-1A — China’s most powerful supercomputer and the second most powerful in the world — began his Wednesday keynote with an overview of the system deployed at the National Supercomputer Center in Tianjin, China.  He followed with an analysis of the barriers that designers face in building an exascale system.

Built from 14,336 Intel CPUs, 7,168 NVIDIA GPUs, and 2,048 Galaxy FT-1000 eight-core processors designed by Liu himself, Tianhe-1A achieves 2.56 petaflops on the Linpack benchmark (out of a theoretical peak of 4.7 petaflops). The hybrid cluster is built largely from commodity parts; however, a few of the components, including the interconnects and the FT chips, are proprietary.

“To get to the petascale, you can choose a traditional design or a new design,” Liu said. “We have been looking for a new way to design and implement a petaflop supercomputer.”

When it was deployed in 2010, many in the HPC world questioned Tianhe-1A’s ability to run scientific applications efficiently. Liu described a broad range of problems that have used thousands to hundreds of thousands of processor cores with great efficiency, from seismic imaging for petroleum exploration to decoding the genome of the E. coli strain that sickened thousands in Germany. These results, he said, put to rest some of the questions about Tianhe-1A’s usability.

After describing the technological and scientific successes of Tianhe-1A, Liu transitioned to a discussion of the problems associated with future exascale systems. He divided the problems into five categories (power, memory, communication, reliability, and application scalability) and quantified each with mathematical models.

Literally.

Transforming each of the main challenges into equations, he described how the models depict the obstacles facing continued speedups. The goal of this endeavor was to “build a synthesized speedup model and define quantitatively the ‘walls’,” Liu said.
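Liu’s actual equations were not published with the talk, but an illustrative model of the same general form shows how each “wall” can enter a synthesized speedup expression (the symbols below are generic placeholders, not Liu’s notation):

\[
S(p) \;=\; \frac{T_1}{(1-f)\,T_1 \;+\; \dfrac{f\,T_1}{p} \;+\; T_{\mathrm{comm}}(p) \;+\; T_{\mathrm{mem}}(p) \;+\; T_{\mathrm{fault}}(p)}
\]

Here T_1 is the single-core execution time, f the parallelizable fraction, and T_comm, T_mem, and T_fault the time lost to communication, memory access, and failure recovery at core count p, with power entering as a side constraint that bounds the feasible p. Any term that fails to shrink as p grows defines one of the “walls.”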

He went on to suggest potential ways over each wall, sometimes through concerted effort by the HPC community, sometimes through emerging innovations.

Liu also showed enthusiasm for untested, emerging technologies such as optical or wireless interconnects, nanoelectronics and quantum and DNA computing, all of which he expects to play a role in the evolution of new systems. He pointed to the high-speed 3D interconnects associated with the Cray XT5 and Fujitsu K computer systems as examples of current technologies that he believes are on the right path to reaching the exascale.

Liu also gave examples of instances where the community must do a better job of optimizing applications for larger systems. Speaking about computer memory, he classified six types of data access that must be considered when speeding up and scaling up applications to tens of thousands of cores.

“Traditional optimization techniques usually consider only some of these characteristics,” Liu said. “We must consider all six characteristics and create a harmonious optimization algorithm.”

This holistic, deep thinking about the interrelationship of the various levels of computation was the main message of Liu’s presentation. He repeatedly returned to graphs showing the impact of various processes, from memory access and communication to power consumption and cost, on the overall time and efficiency of computation.

“To reach the exascale, we must research solutions at all system levels,” Liu concluded.

Charles Moore, Corporate Fellow and the Technology Group CTO, Advanced Micro Devices

Reaching exascale was the subtext of Charles Moore’s Thursday keynote at IEEE Cluster 2011, but AMD’s emerging line of accelerated processing units (APUs) was the real subject of his talk.

APUs are a class of chip that Moore believes will power future exascale systems. According to Moore, exascale systems will achieve their massive speedup by using both CPUs and GPUs or other accelerators.

“We are approaching what we at AMD call the heterogeneous systems era,” Moore said. That alone is not groundbreaking; what is important is that, for AMD, these cores will all be located on the same chip.

Among the chips Moore discussed was the “Brazos” E-series Fusion APU, which contains dual cores, dual GPUs, and a video accelerator on a single chip. It achieves 90 gigaflops of single-precision performance within just an 18-watt TDP. “Desna,” Brazos’ little cousin, runs on only 6 watts and is suitable for passively cooled designs like tablets. “Llano,” AMD’s higher-end chip, will have four CPU cores and advanced GPUs, and will offer 500 gigaflops of compute power per node.
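For a rough sense of scale (simple arithmetic on the figures above, not an AMD efficiency claim): the Brazos numbers work out to 90 GFLOPS ÷ 18 W = 5 gigaflops per watt in single precision, and an exaflop sustained at that efficiency would still draw about 10^18 ÷ (5 × 10^9) = 2 × 10^8 watts, roughly 200 MW, before counting memory, interconnect, or cooling.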

One advantage of AMD’s new line is that you “can use this chip for graphics or as a compute offload or both at the same time,” Moore said.

The powerful chips that Moore prophesied won’t quite take us to the exascale, but they will get us most of the way, he said. For exascale, an overhaul of the memory architecture and programming models is needed.

Moore alluded to 3D stacked memory being developed by AMD as a possible technological solution to memory access problems. He also described the new AMD Fusion system architecture, where the goal is “making the GPU a first class citizen in the system architecture.”

The Fusion system architecture itself is “agnostic for CPU and GPU.” “We’ll add other accelerators to this frame in the future,” Moore said. “It’s not just about GPUs, it’s about heterogeneous computing in general.”

Openness was a common theme in the last part of Moore’s talk where he described AMD’s long-standing dedication to open source software and standards. He discussed emerging standards including HyperShare, the Open Compute Project, and the Common Communication Interface, which he believes will play key roles in getting to exascale.

“Open standards are the basis for large ecosystems,” he said. “If you look over time, open standards always win.”

Looking beyond the next generation of chips, Moore described the potential for an “awesome exascale-class” 10-teraflop x86 APU computing node feasible in the 2018 timeframe.
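Taken at face value (peak figures, ignoring efficiency and everything outside the processors), such nodes imply that an exaflop machine would need on the order of 10^18 ÷ 10^13 = 100,000 of them, which is why the memory, interconnect, and programming-model questions raised by all three keynoters matter as much as the chips themselves.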

“We intend to make the unprecedented processing capability of the APU as accessible to programmers as the CPU is today,” he said.
