Cray’s Adaptive Supercomputing – A Paradigm Shift

By Christopher Lazou

March 24, 2006

“Adaptive supercomputing will cause a paradigm shift in the way users select and use HPC systems. Adaptive supercomputing is necessary to support the future needs of HPC users as their need for higher performance on more complex applications outpaces Moore's Law. The Cray motto is: adapt the system to the application – not the application to the system,” says Steve Scott, CTO of Cray Inc., March 2006.

This past week Cray announced their vision of “Adaptive Supercomputing,” the company's long-range HPC technology strategy. Steve Scott, CTO of Cray, briefed me about this strategy and I'd like to share with you, in broad terms, what he said.

The increasing demand for higher performance can no longer be met by the processor improvements predicted by Moore's Law and a one-size-fits-all mentality. HPC users are no longer getting the performance advances they need from microprocessors. The commercial response to the slowdown in Moore's Law has been to provide multi-core chips. These are general-purpose architectures, optimized for the most widely used applications. But, as is widely recognized, when scientific computing migrated to commodity platforms, interconnect performance, in terms of both bandwidth and latency, became the limiting factor on overall application performance, and it remains a bottleneck to this day.

Take an example from the Earth sciences: users wish to run simulations of coupled climate models comprising ocean, atmosphere, biosphere and solid-earth components [NASA Report: Earth Sciences Vision 2030]. Currently, these models are designed to run on a single processor architecture (e.g., scalar or vector). However, the growth in both model complexity and the number of components lends itself to a variety of processing technologies. The goal is to tie these models together and exchange data between them; with this new approach, applications can reach completion on dramatically shorter time scales.

Another example comes from Computer Aided Engineering (CAE). Industry is pushing the limits on problem size and complexity, and CAE model sizes are currently limited by computational and data storage capabilities. Moving to multi-physics simulations and modeling real-world behavior requires coupling previously independent simulations. A full-system analysis requires a machine with orders-of-magnitude better performance, since one needs to examine the behavior of composite materials at the micro scale and real-time stress-strain behavior at the macro scale.

The CAE example above was used as a Grand Challenge Case Study in a recent report on High Performance Computing & Competitiveness, sponsored by the Council on Competitiveness in the USA. The report states: “The next high-payoff high performance computing grand challenge is to optimize the design of a complete vehicle by simultaneously simulating all market and regulatory requirements in a single integrated computational model.”

After exhaustive analysis, Cray Inc. concluded that, although multi-core commodity processors will deliver some improvement, the greatest opportunity for application acceleration lies in exploiting parallelism through a variety of processor technologies: scalar, vector, multithreading and hardware accelerators (e.g., FPGAs or ClearSpeed co-processors).

Adaptive supercomputing combines multiple processing architectures into a single scalable system. From the user's point of view, there is the application program, which relies on libraries, tools, compilers, scheduling, system management and a runtime system. Then comes the adaptive software: a compiler that knows what types of processors are available on the heterogeneous system and targets code to the most appropriate processor. In certain cases, at run time, the system will determine the most appropriate processor for running a piece of code and direct the execution accordingly. As Scott said: “Adapt the system to the application – not the application to the system.”
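To make the idea concrete, here is a purely conceptual C sketch of runtime dispatch. It is not Cray's implementation: the "hardware inventory" is faked with a hypothetical TARGET_UNIT environment variable, whereas a real adaptive system would let the compiler and runtime make this decision from the machine description.

```c
/* Conceptual sketch only: bind a kernel to the processing unit that the
 * runtime reports as available.  TARGET_UNIT is a hypothetical knob used
 * here in place of real hardware detection. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef void (*kernel_fn)(const double *x, double *y, int n);

static void kernel_scalar(const double *x, double *y, int n)
{
    for (int i = 0; i < n; i++) y[i] = 2.0 * x[i];   /* plain serial loop */
}

static void kernel_vector(const double *x, double *y, int n)
{
    /* stand-in for a vectorized or accelerator-offloaded variant */
    for (int i = 0; i < n; i++) y[i] = 2.0 * x[i];
}

int main(void)
{
    const char *hw = getenv("TARGET_UNIT");          /* hypothetical */
    kernel_fn run = (hw && strcmp(hw, "vector") == 0) ? kernel_vector
                                                      : kernel_scalar;
    double x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8];
    run(x, y, 8);
    printf("y[7] = %.1f (via %s kernel)\n", y[7],
           (run == kernel_vector) ? "vector" : "scalar");
    return 0;
}
```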

Cray's roadmap to adaptive supercomputing will unfold in phases. Phase 0 represents the current generation of single-architecture systems: the Cray XT3 (MPP scalar), the Cray X1E (vector), the Cray MTA (multithreaded), and the Cray XD1 (AMD Opteron plus FPGA accelerators).

Phase 1, codenamed “Rainier,” will create an integrated user environment across all of Cray's platforms. In Phase 2, Cray plans integrated multi-architecture systems, currently codenamed “Eldorado” (upgraded Cray XT3 technology plus multithreading) and “Black Widow” (upgraded Cray XT3 technology plus vector processors), scheduled to become available in 2007. All of these platforms will use AMD Opterons for their scalar processor base.

In Phase 3, the plan is to progress to adaptive supercomputing in a transparent, scalable, robust and optimized way, using scalar, vector, multithreaded and possibly reconfigurable computing. In this phase, Cray systems will incorporate dynamic resource allocation, using software that automates adaptive supercomputing. The emerging technologies being developed for Cray's Cascade project are expected to deliver this integrated platform by 2009/10. Cascade is expected to include heterogeneous processing at the node level, with fast serial, vector and highly multithreaded capability, all in the same cabinet.

Recall that the motivation for Cascade was to address the lack of productivity of large-scale HPC (MPP) machines based on commodity microprocessors. The reasons why they were unproductive became obvious, and rather painful, to the user community.

Writing parallel code using low-level MPI constructs is difficult, and it is a major burden for computational scientists, especially since programming tools that understand program behavior are in short supply. As is well known, conventional models break down with scale, and as complexity increases, a lot of time is spent modifying code to fit a machine's characteristics. For example, cluster machines have relatively low bandwidth between processors and cannot directly access global memory. As a result, programmers work hard to reduce communication traffic and have to bundle communication into messages instead of simply accessing shared memory. If the machine does not match the code's attributes, programming becomes much more difficult.
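As a hedged illustration of that point (generic MPI, not Cray-specific code), the fragment below performs the same neighbor exchange two ways: once by bundling the data into a matched send/receive pair, and once by reading the remote words directly through an MPI-2 one-sided window, which is closer in spirit to globally addressable memory.

```c
/* Illustrative sketch: two-sided message bundling vs. one-sided access. */
#include <mpi.h>
#include <stdio.h>

#define N 4

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int partner = (rank + 1) % size;

    double local[N], remote[N];
    for (int i = 0; i < N; i++) local[i] = rank * 100.0 + i;

    /* (a) Two-sided message passing: data is bundled into a message and
     *     both sides must participate in the exchange. */
    MPI_Sendrecv(local, N, MPI_DOUBLE, partner, 0,
                 remote, N, MPI_DOUBLE, partner, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* (b) One-sided access: expose local[] in a window, then read the
     *     partner's copy directly, without explicit matching receives. */
    MPI_Win win;
    MPI_Win_create(local, N * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);
    MPI_Get(remote, N, MPI_DOUBLE, partner, 0, N, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);
    MPI_Win_free(&win);

    printf("rank %d read %.0f from rank %d\n", rank, remote[0], partner);
    MPI_Finalize();
    return 0;
}
```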

The biggest challenge arises because application codes vary significantly in their requirements. To scale, an application must expose some form of parallelism. Many HPC applications have rich, SIMD-style data-level parallelism: they perform similar operations on arrays of data and can be significantly accelerated using fine-grained parallelism. Other applications can exploit thread-level parallelism, which allows many separate threads to execute independently; this parallelism may be found at multiple levels in the code, allowing significant acceleration via multithreading. Some parts of applications are not parallel at all and need fast serial scalar execution, since slow serial performance drags down overall performance (Amdahl's Law). Applications also vary in their memory and network bandwidth needs: low versus high, dense versus sparse.
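A minimal worked example of Amdahl's Law makes the serial-fraction penalty concrete: if a fraction p of the work is parallelizable, the speedup on n processors is bounded by 1 / ((1 - p) + p / n), so even 95 percent parallel code gains less than 20x on 1,024 processors.

```c
/* Minimal sketch of Amdahl's Law, as referenced in the text. */
#include <stdio.h>

static double amdahl_speedup(double p, double n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    /* The serial 5% dominates at scale, which is why fast scalar
     * execution still matters in a parallel machine. */
    printf("p = 0.95, n = 1024 -> speedup %.1f\n", amdahl_speedup(0.95, 1024.0));
    printf("p = 0.99, n = 1024 -> speedup %.1f\n", amdahl_speedup(0.99, 1024.0));
    return 0;
}
```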

According to Cray, a core mission of the Cascade project is to ease the development of parallel codes. It will support the legacy programming models MPI and OpenMP, as well as improved variants such as SHMEM, UPC and CAF. In addition, Cray is developing a new, global-view alternative, with languages such as Chapel and GMA. It will provide programming tools to ease debugging, tuning and performance analysis. In the Cascade project, Cray is designing an adaptive, configurable machine that can match the attributes of a wide variety of applications: fast serial performance, data-level parallelism, multithreaded parallelism, and both regular and sparse bandwidth of varying intensities. The overall objective is to deliver a significant increase in performance. These attributes also ease programming and should make the machine much more broadly applicable.

For modern, large-scale systems, most of the hardware cost is in the interconnect: circuit boards, connectors, wires, routers, electro-optics, fibers and so on. The task is to make global bandwidth less costly and to provide dynamic reconfiguration that matches the interconnect to customer needs. The challenge is to push signaling rates as high as possible using the least expensive technology at each level (electrical, optical), to design routers that use all network links well, and to use efficient network topologies.

According to Steve Scott, for ease of programming, global shared memory is unbeatable. It provides the lowest-latency, lowest-overhead communication, enables fine-grained overlap of computation and communication, and tolerates latency through processor concurrency. In contrast, message-passing concurrency is constraining and hard to program. Vectors provide concurrency within a thread; multithreading provides concurrency between threads. The challenge is to exploit locality in order to reduce bandwidth demand. This is done using hierarchical processor architectures to enhance temporal locality, and lightweight thread migration to exploit spatial locality. Other techniques to reduce network traffic, such as atomic memory operations and single-word network transfers when no locality is present, are also used.
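As a hedged example of such fine-grained traffic, written against the generic OpenSHMEM API rather than any Cray-specific interface, the sketch below uses an atomic memory operation so that each processing element updates a remote counter with a single small transaction instead of bundling updates into messages for the owner to apply.

```c
/* Illustrative sketch (OpenSHMEM 1.4+): remote atomic update as a
 * fine-grained alternative to bundled messages. */
#include <shmem.h>
#include <stdio.h>

int main(void)
{
    static long counter = 0;       /* symmetric: exists on every PE */

    shmem_init();
    int me = shmem_my_pe();

    /* Every PE adds its contribution directly into PE 0's counter
     * with one single-word atomic network transaction. */
    shmem_long_atomic_add(&counter, (long)(me + 1), 0);
    shmem_barrier_all();

    if (me == 0)
        printf("sum of (pe+1) over all PEs = %ld\n", counter);

    shmem_finalize();
    return 0;
}
```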

To exploit the Cascade architecture optimally, Cray specified and is implementing a new high-productivity language, named Chapel. Current parallel languages tend to require fragmentation of data and control, and they fail to cleanly isolate computation from its virtual processor topology. They also tend to support a single type of parallelism, either data parallelism or task parallelism, and fail to support composition of the two. In short, they offer few data abstractions.

Chapel, on the other hand, was designed as a language for the rapid development of new codes. It supports abstractions for data and task parallelism, arrays (sparse, hierarchical, etc.), graphs, hash tables and so on. Most importantly, it allows prototype code to evolve into production code very quickly.

Thus, the Cascade project addresses performance by providing configurable high-bandwidth memory and interconnects, globally addressable memory with fine-grained synchronization, and heterogeneous processing to match application needs. It preserves portability with a Linux-based OS, a standard POSIX API and Linux services, and it supports mixed legacy languages and programming models. In addition, Chapel provides an architecturally neutral path forward for code.

In summary, the benefits of Cray's adaptive supercomputing vision are:

  • It provides significant application performance improvement by leveraging many forms of parallelism.
  • It potentially increases productivity by creating a transparent interface to multiple processor types.
  • It provides a familiar Linux user environment.
  • It addresses a wider variety of applications.
  • It creates a low-cost test-bed for experimentation on custom processor technologies.

As stated at the beginning of this article: “Adaptive supercomputing is necessary to support the future needs of HPC users, as their need for higher performance on more complex applications outpaces Moore's Law. Adaptive supercomputing will cause a paradigm shift in the way users select and use HPC systems. Cray's experience, existing investments and innovative technologies position Cray to deliver on the adaptive supercomputing vision,” says Steve Scott.

In my view, the Cray vision for adaptive supercomputing is exciting and the phased strategy is very sensible. The big challenge in the next few years is managing the extra complexity, at both the software and hardware levels, so that enhanced productivity is delivered to the user application transparently. Nevertheless, the mission is clear: cross the one-petaflop barrier by the end of this decade.

—–

Copyright (c) Christopher Lazou, HiPerCom Consultants, Ltd., UK. March 2006. Brands and names are the property of their respective owners.
