IEEE Conference Keynoters Lay Out Path to Exascale Computing

By Aaron Dubrow

October 5, 2011

The challenges of exascale computing were the main focus of the three keynote addresses at the IEEE Cluster 2011 conference, hosted in Austin, Texas, from September 26 through 30. The speakers, renowned leaders in cluster computing, described the obstacles and opportunities involved in building systems one thousand times more powerful than today’s petascale supercomputers. Speaking from the perspectives of the software developer (Thomas Sterling), the cluster designer (Liu GuangMing), and the chip architect (Charles Moore), the three presented their thoughts on what is needed to reach exascale.

Thomas Sterling, Indiana University, Center for Research in Extreme Scale Technologies (CREST)

With a confidence born from long experience, Thomas Sterling, father of Beowulf, industry veteran, and associate director of the Center for Research in Extreme Scale Technologies (CREST) at Indiana University, kicked off the conference on Tuesday with a keynote on the need for a new programming paradigm that is adaptive, intelligent, asynchronous, and able to deliver significantly better performance than today’s execution model.

Before jumping into an explanation of the new programming model, Sterling presented an eccentric history of cluster computing: from the MIT Whirlwind project in the 1950s and Norbert Wiener’s cybernetic systems, through the Beowulf era, when commodity PCs were first harnessed together to build a powerful cluster, to today’s petaflop mega-machines, one million times faster than the first Beowulf cluster.

Throughout the various phases of supercomputing innovation, several different programming paradigms have emerged, Sterling explained, from serial execution to vector processing to SIMD, to today’s dominant model, which uses MPI (Message Passing Interface) to communicate among many cores.

“Clusters will go through another metamorphosis,” Sterling predicted, adding, “commodity clusters will survive paradigm shifts.”

Current trends suggest that the trajectory of computing speed is leveling off. Sterling identified a number of problems that may prevent technologists from developing ever-larger systems. Power and reliability will be challenging, but Sterling sees the programming model as the biggest obstacle.

In the synchronous model represented by MPI, calculations need to be performed in a specific order, and with precision, to minimize latency, a dance that is difficult to keep up with. Only a handful of codes can run on the hundreds of thousands of cores available on today’s large supercomputers. Exascale computers, which Sterling said he hopes to see by the end of the decade, will likely have millions of cores. At that scale, component reliability and synchronization costs make the usual data-parallel computing approach untenable.

“We must manage asynchrony to allow computing to be self-adaptive,” he said.

As an analogy, he pointed to the difference between a guided missile and a cannon. MPI represents an uncontrolled, ballistic, brute-force method of solving problems. The new paradigm, or “experimental execution model,” that Sterling presented is exemplified by ParalleX, the project of his own research group.

“ParalleX is an abstract test bed to explore the synthesis of ideas for current and extreme scale applications,” Sterling said. “We want to bring strong scaled applications back into the cluster world.”

His software employs micro-checkpointing (ephemeral fault detection and correction on the fly) and introspection (a kind of machine learning) that closes the loop, as in cybernetics, constantly adjusting like the guided missile. It also manages asynchrony through “constraint-based synchronization.”

“You don’t want to tell the program when to do the tasks,” Sterling said. “You want to tell the program the conditions under which the task can be done. This allows the program to decide on its own when to undertake a given task.”
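Constraint-based synchronization amounts to a dataflow or futures-style discipline: a task declares the data it needs and fires whenever those inputs become available, rather than at a point fixed by the programmer. The fragment below is a minimal sketch of that idea using standard C++ futures; it is not the ParalleX or XPI API, only an illustration of expressing dependencies instead of a schedule.

    // Minimal sketch of constraint-based synchronization using standard C++
    // futures. This is not the ParalleX/XPI API; it only illustrates the idea
    // Sterling describes: a task runs when the values it depends on are ready,
    // not at a point dictated by the programmer.
    #include <future>
    #include <iostream>

    int produce_a() { return 2; }   // stand-in for one long-running task
    int produce_b() { return 3; }   // stand-in for another, independent task

    int main() {
        // Launch the producers; no ordering is imposed between them.
        std::future<int> a = std::async(std::launch::async, produce_a);
        std::future<int> b = std::async(std::launch::async, produce_b);

        // The consumer states its constraint ("I need a and b"), not a schedule.
        // It blocks only on the data it actually depends on, leaving the runtime
        // free to execute everything else in whatever order it chooses.
        std::future<int> sum = std::async(std::launch::async,
            [&a, &b] { return a.get() + b.get(); });

        std::cout << "sum = " << sum.get() << std::endl;   // prints: sum = 5
        return 0;
    }

In a full ParalleX-style runtime the same firing rule extends to distributed objects, message-driven work, and fault recovery, but the dependency-driven trigger is the core of letting the program decide on its own when to undertake a task.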

He pointed to initial performance gains from porting an adaptive mesh refinement algorithm for astrophysics to the ParalleX execution model. Results showed a two- to three-fold improvement in performance from changing the underlying runtime from MPI to ParalleX.

Some of these same goals are being pursued in a few significant, but not particularly well-known, programming experiments, according to Sterling. In addition to ParalleX, he discussed examples from the StarSs project at the Barcelona Supercomputing Center, which employs a new model of dataflow execution, and the SWift Adaptive Runtime Machine (SWARM) from ET International.

These execution models may not yet provide optimal computing, Sterling admitted, but the solutions being developed are needed for the community to advance.

“Cluster computing is going through a phase transition,” he asserted. “It will take leadership in this new paradigm shift and it will be the medium where a new paradigm is manifested.”

The tools are open source, and XPI, the API for the execution environment, is in alpha testing and available to friendly users. It will be released to the general public soon.

Liu GuangMing, Director, National Supercomputer Center, Tianjin, China

Liu GuangMing, the designer of Tianhe-1A — China’s most powerful supercomputer and the second most powerful in the world — began his Wednesday keynote with an overview of the system deployed at the National Supercomputer Center in Tianjin, China.  He followed with an analysis of the barriers that designers face in building an exascale system.

Built from 14,336 Intel CPUs, 7,168 NVIDIA GPUs, and 2,048 Galaxy FT-1000 eight-core processors designed by Liu himself, Tianhe-1A has a peak performance of 4.7 petaflops and a sustained LINPACK performance of 2.57 petaflops. The hybrid cluster is composed largely of commodity parts; however, a few of the components, including the interconnects and FT chips, are proprietary.

“To get to the petascale, you can choose a traditional design or a new design,” Liu said. “We have been looking for a new way to design and implement a petaflop supercomputer.”

When it was deployed in 2010, many in the HPC world questioned Tianhe-1A’s ability to run scientific applications efficiently. Liu described a broad range of problems that had used thousands to hundreds of thousands of processor cores with great efficiency, from seismic imaging for petroleum exploration to decoding the genome of the E. coli strain that sickened thousands in Germany. These results put to rest some of the questions about Tianhe-1A’s usability.

After describing the technological and scientific successes of Tianhe-1A, Liu transitioned to a discussion of the problems associated with future exascale systems. He divided the problems into five categories (power, memory, communication, reliability, and application scalability) and quantified each with mathematical models.

Literally.

Transforming each of the main challenges into equations, he described how the models depict the obstacles facing continued speedups. The goal of this endeavor was to “build a synthesized speedup model and define quantitatively the ‘walls’,” Liu said.
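Liu’s equations are not reproduced in this article, but a model of the kind he described can be sketched in the spirit of Amdahl’s law: write the time per step as a sum of compute, memory, and communication terms, and a “wall” is whichever term stops shrinking as the machine grows. The formulation below is an illustrative assumption, not Liu’s model.

    % Illustrative speedup model (an assumption, not Liu's actual equations).
    % T(p): time on p processors; the terms that do not shrink with p are the "walls".
    \[
      T(p) \;=\; \underbrace{\frac{T_{\mathrm{comp}}}{p}}_{\text{scalable work}}
            \;+\; \underbrace{\frac{V_{\mathrm{mem}}}{B_{\mathrm{mem}}}}_{\text{memory wall}}
            \;+\; \underbrace{\alpha \log p + \frac{V_{\mathrm{msg}}}{B_{\mathrm{net}}}}_{\text{communication wall}},
      \qquad
      S(p) \;=\; \frac{T(1)}{T(p)}.
    \]

In a model of this shape, the speedup S(p) flattens once the memory and communication terms dominate the shrinking compute term, which is the kind of behavior Liu’s “walls” quantify.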

He went on to suggest potential ways over each wall, sometimes through concerted effort by the HPC community, sometimes through emerging innovations.

Liu also showed enthusiasm for untested, emerging technologies such as optical or wireless interconnects, nanoelectronics and quantum and DNA computing, all of which he expects to play a role in the evolution of new systems. He pointed to the high-speed 3D interconnects associated with the Cray XT5 and Fujitsu K computer systems as examples of current technologies that he believes are on the right path to reaching the exascale.

Liu also gave examples of instances where the community must do a better job of optimizing applications for larger systems. Speaking about computer memory, he classified six types of data access that must be considered when speeding up and scaling up applications to tens of thousands of cores.

“Traditional optimization techniques usually consider only some of these characteristics,” Liu said. “We must consider all six characteristics and create a harmonious optimization algorithm.”

This holistic, deep thinking about the interrelationships among the various levels of computation was the main message of Liu’s presentation. He repeatedly returned to graphs that showed the impact of various processes, from memory access and communication to power consumption and cost, on the overall time and efficiency of computation.

“To reach the exascale, we must research solutions at all system levels,” Liu concluded.

Charles Moore, Corporate Fellow and Technology Group CTO, Advanced Micro Devices

Reaching exascale was the subtext of Charles Moore’s Thursday keynote at IEEE Cluster 2011, but AMD’s emerging line of accelerated processing units (APUs) was the real subject of his talk.

APUs are a class of chip that Moore believes will power future exascale systems. According to Moore, exascale systems will achieve their massive speedup by using both CPUs and GPUs or other accelerators.

“We are approaching what we at AMD call the heterogeneous systems era,” Moore said. That alone is not groundbreaking; what is important is that, for AMD, these cores will all be located on the same chip.

Among the chips discussed by Moore was the “Brazos” E-series Fusion APU, which contains dual CPU cores, dual GPUs, and a video accelerator on a single chip. It achieves 90 gigaflops of single-precision performance within an 18W TDP. “Desna,” Brazos’ little cousin, runs on only 6W and is suitable for passively cooled designs like tablets. “Llano,” AMD’s higher-end chip, will have four CPU cores and advanced GPUs, and will offer 500 gigaflops of compute power per node.

One advantage of AMD’s new line is that you “can use this chip for graphics or as a compute offload or both at the same time,” Moore said.

The powerful chips that Moore prophesied won’t quite take us to the exascale, but they will get us most of the way, he said. For exascale, an overhaul of the memory architecture and programming models is needed.
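A back-of-envelope calculation makes the gap concrete. Assuming the 20 MW power budget commonly cited in the exascale community (a figure not quoted in Moore’s talk), the Brazos numbers fall roughly a factor of ten short of the required energy efficiency:

    % Back-of-envelope efficiency check using the figures Moore quoted,
    % plus an assumed 20 MW exascale power budget.
    \[
      \frac{90\ \mathrm{GFLOPS}}{18\ \mathrm{W}} = 5\ \mathrm{GFLOPS/W}
      \qquad\text{vs.}\qquad
      \frac{10^{18}\ \mathrm{FLOPS}}{20\ \mathrm{MW}} = 50\ \mathrm{GFLOPS/W}.
    \]

The comparison is loose (the 90 gigaflops figure is single precision, and node-level overheads are ignored), but it illustrates why Moore argued that memory architecture and programming models, and not just faster APUs, need an overhaul on the way to exascale.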

Moore alluded to 3D stacked memory being developed by AMD as a possible technological solution to memory access problems. He also described the new AMD Fusion system architecture, where the goal is “making the GPU a first class citizen in the system architecture.”

The Fusion system architecture itself is “agnostic for CPU and GPU.”  “We’ll add other accelerators to this frame in the future,” Moore said. “It’s not just about GPUs, it’s about heterogeneous computing in general.”

Openness was a common theme in the last part of Moore’s talk, where he described AMD’s long-standing dedication to open source software and standards. He discussed emerging standards including HyperShare, the Open Compute Project, and the Common Communication Interface, which he believes will play key roles in getting to exascale.

“Open standards are the basis for large ecosystems,” he said. “If you look over time, open standards always win.”

Looking beyond the next generation of chips, Moore described the potential for an “awesome exascale-class” 10-teraflop x86 APU computing node feasible in the 2018 timeframe.

“We intend to make the unprecedented processing capability of the APU as accessible to programmers as the CPU is today.”
