Cray’s Adaptive Supercomputing – A Paradigm Shift

By Christopher Lazou

March 24, 2006

“Adaptive supercomputing will cause a paradigm shift in the way users select and use HPC systems. Adaptive supercomputing is necessary to support the future needs of HPC users as their need for higher performance on more complex applications outpaces Moore's Law. The Cray motto is: adapt the system to the application – not the application to the system,” says Steve Scott, CTO of Cray Inc., March 2006.

This past week Cray announced their vision of “Adaptive Supercomputing,” the company's long-range HPC technology strategy. Steve Scott, CTO of Cray, briefed me about this strategy and I'd like to share with you, in broad terms, what he said.

The increasing demand for better performance can no longer be met through the processor improvements predicted by Moore's Law and a one-size-fits-all mentality. HPC users are no longer getting the performance advances they need from microprocessors. The commercial response to the slowdown in Moore's Law has been to provide multi-core chips. These are general-purpose architectures, optimized for the most widely used applications. But as is widely recognized, when scientific computing migrated to commodity platforms, interconnect performance, in terms of both bandwidth and latency, became the limiting factor on overall application performance, and it remains a bottleneck to this day.

Take an example from the Earth sciences: users wish to perform simulations with coupled climate models spanning ocean, atmosphere, biosphere and solid earth [NASA Report; Earth Sciences Vision 2030]. The goal is to tie these models together so that they can exchange data. Currently, each model is designed to run on only one processor architecture (e.g., scalar or vector). However, the increase in both model complexity and the number of components lends itself to a variety of processing technologies. With this new approach, applications can reach completion on dramatically shorter time scales.

Another example is from Computer Aided Engineering (CAE). Industry is pushing the limits of problem size and complexity. CAE model sizes are currently limited by computational and data storage capabilities. Moving to multi-physics simulations and modeling real-world behavior requires coupling previously independent simulations. A full system analysis requires a system with orders of magnitude better performance, since one needs to examine the behavior of composite materials at the micro-scale and real-time stress-strain behavior at the macro-scale.

The CAE example above was used as a Grand Challenge Case Study in a recent report on High Performance Computing & Competitiveness, sponsored by the Council on Competitiveness in the USA. The report states: “The next high-payoff high performance computing grand challenge is to optimize the design of a complete vehicle by simultaneously simulating all market and regulatory requirements in a single integrated computational model.”

After exhaustive analysis, Cray Inc. concluded that, although multi-core commodity processors will deliver some improvement, exploiting parallelism through a variety of processor technologies, using scalar, vector, multithreading and hardware accelerators (e.g., FPGAs or ClearSpeed co-processors), creates the greatest opportunity for application acceleration.

Adaptive supercomputing combines multiple processing architectures into a single scalable system. From the user's point of view, there is the application program, which uses libraries, tools, compilers, scheduling, system management and a runtime system. Beneath that sits the adaptive software: a compiler that knows what types of processors are available on the heterogeneous system and targets code to the most appropriate processor. In certain cases, the system will determine at run-time the most appropriate processor for running a piece of code and direct the execution accordingly. As Scott said: “Adapt the system to the application – not the application to the system.”
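
To make the idea of targeting code to the most appropriate processor concrete, here is a minimal sketch in C. It is purely illustrative and assumes a hypothetical runtime of my own devising: the proc_t type, the kernel_profile_t descriptor and the choose_target() function are not Cray interfaces, they merely show the kind of decision an adaptive compiler or runtime would make for each piece of code.

    /* Hypothetical sketch only: Cray has not published this interface.
     * It illustrates a runtime choosing the processor type best matched
     * to a piece of code, as described in the article.                   */
    #include <stdio.h>

    typedef enum { PROC_SCALAR, PROC_VECTOR, PROC_MULTITHREADED, PROC_FPGA } proc_t;

    typedef struct {
        const char *name;
        double vectorizable_fraction;  /* share of work in long, regular loops */
        int    independent_threads;    /* exploitable thread-level parallelism */
        int    fixed_function_kernel;  /* 1 if a known FPGA-friendly kernel    */
    } kernel_profile_t;

    /* Pick the "most appropriate processor" for a given kernel profile. */
    static proc_t choose_target(const kernel_profile_t *k)
    {
        if (k->fixed_function_kernel)        return PROC_FPGA;
        if (k->vectorizable_fraction > 0.8)  return PROC_VECTOR;
        if (k->independent_threads > 1000)   return PROC_MULTITHREADED;
        return PROC_SCALAR;                  /* fast serial execution by default */
    }

    int main(void)
    {
        kernel_profile_t kernels[] = {
            { "dense stencil update", 0.95,   64, 0 },
            { "graph traversal",      0.10, 5000, 0 },
            { "control-heavy setup",  0.05,    1, 0 },
        };
        const char *names[] = { "scalar", "vector", "multithreaded", "FPGA" };

        for (int i = 0; i < 3; i++)
            printf("%-22s -> %s\n", kernels[i].name,
                   names[choose_target(&kernels[i])]);
        return 0;
    }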

Cray's roadmap to adaptive supercomputing will unfold in phases. Phase 0 represents the current generation of individual-architecture systems: the Cray XT3 (MPP scalar), the Cray X1E (vector), the Cray MTA (multithreaded) and the Cray XD1 (AMD Opteron plus FPGA accelerators).

Phase 1, codenamed “Rainier,” will create an integrated user environment across all of Cray's platforms. In Phase 2, Cray plans integrated multi-architecture systems, currently codenamed “Eldorado” (upgraded Cray XT3 technology plus multithreading) and “Black Widow” (upgraded Cray XT3 technology plus vector processors), scheduled to become available in 2007. All of these platforms will use AMD Opterons for their scalar processor base.

In Phase 3, the plan is to progress to full adaptive supercomputing, in a transparent, scalable, robust and optimized way, using scalar, vector, multithreading and possibly reconfigurable computing. In this phase, one will see the development of Cray systems that incorporate dynamic resource allocation, using software that automates adaptive supercomputing. The emerging technologies being developed for Cray's Cascade project are expected to deliver this integrated platform by 2009/10. Cascade is expected to include heterogeneous processing at the node level, with fast serial, vector and highly multithreaded capability, all in the same cabinet.

Recall that the motivation for Cascade was to address the lack of productivity in large-scale HPC (MPP) machines based on commodity microprocessors. The reasons why they were unproductive became obvious, and rather painful, to the user community.

Writing parallel code using low-level MPI constructs is a difficult task and a major burden for computational scientists, especially since programming tools that understand program behavior are in short supply. As is well known, conventional models break down with scale. And as complexity increases, a lot of time is spent modifying code to fit a machine's characteristics. For example, cluster machines have relatively low bandwidth between processors and cannot directly access global memory. As a result, programmers work hard to reduce communication traffic and have to bundle communication up into messages, instead of simply accessing shared memory, as the sketch below illustrates. If the machine does not match the code's attributes, programming becomes much more difficult.
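
To make the burden concrete, the standard MPI fragment below performs a trivial neighbor exchange in C. Even this simple pattern requires explicit rank arithmetic, buffers, tags and a matched send/receive call; on a globally addressable memory, the same value could simply be read with a load from the neighbor's portion of the array. The example uses only standard MPI calls and is illustrative rather than taken from any Cray code.

    /* Even a trivial neighbor exchange in MPI requires explicit buffers,
     * ranks, tags and matching send/receive bookkeeping.                  */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int left  = (rank - 1 + size) % size;   /* neighbor bookkeeping */
        int right = (rank + 1) % size;

        double my_value = (double)rank;         /* the data we want to share */
        double from_left;

        /* Bundle the communication up into an explicit message exchange. */
        MPI_Sendrecv(&my_value, 1, MPI_DOUBLE, right, 0,
                     &from_left, 1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received %.1f from rank %d\n", rank, from_left, left);

        MPI_Finalize();
        return 0;
    }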

The biggest challenge arises because application codes vary significantly in their requirements. To scale, an application must have some form of parallelism. Many HPC applications have rich, SIMD-style data-level parallelism: they perform similar operations on arrays of data and can be significantly accelerated using fine-grained parallelism. Other applications can take advantage of thread-level parallelism, which enables many separate threads to execute independently. This parallelism may be found at multiple levels in the code, allowing significant acceleration via multithreading. Some parts of applications are not parallel at all and need fast serial scalar execution, since slow serial execution drags down overall performance (Amdahl's Law). Applications also vary in their memory and network bandwidth needs: low vs. high, dense vs. sparse.
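
The short C program below (plain C plus one OpenMP directive; the names and sizes are mine, chosen for illustration) sketches these forms side by side: a data-parallel loop that a vectorizing compiler can map onto vector hardware, a thread-parallel loop, and a serial section with a loop-carried dependence whose cost Amdahl's Law bounds.

    /* Sketch of the forms of parallelism described above; illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #define N 1000000

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        if (!a || !b) return 1;
        for (int i = 0; i < N; i++) b[i] = (double)i;

        /* Data-level (SIMD-style) parallelism: the same operation applied
         * across an array; a vectorizing compiler can map this loop onto
         * vector hardware.                                                 */
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * b[i] + 1.0;

        /* Thread-level parallelism: independent iterations run as separate
         * threads (a highly multithreaded processor could run thousands).  */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = a[i] * b[i];

        /* Serial section: the loop-carried dependence leaves nothing to
         * parallelize, so fast scalar execution matters.  Amdahl's Law: if
         * 5% of the runtime stays serial, speedup is capped at 1/0.05 = 20x
         * no matter how many processors are added.                          */
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            sum = 0.5 * sum + a[i];

        printf("checksum: %g\n", sum);
        free(a); free(b);
        return 0;
    }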

According to Cray, the Cascade project's core mission is to ease the development of parallel codes. It will support the legacy programming models MPI and OpenMP, as well as the improved variants SHMEM, UPC and CAF. In addition, Cray is developing a new, alternative global-view approach, with languages such as Chapel and GMA. It will provide programming tools to ease debugging, tuning and performance analysis. In the Cascade project, Cray is designing an adaptive, configurable machine that can match the attributes of a wide variety of applications: fast serial performance, data-level parallelism, multithreaded parallelism, and regular or sparse bandwidth of varying intensities. The overall objective is to deliver a significant increase in performance. These attributes also ease programming and should make the machine much more broadly applicable.

For modern, large-scale systems, most of the hardware cost is in the interconnect: circuit boards, connectors, wires, routers, electro-optics, fibers and so on. The task is to make global bandwidth less costly and to provide dynamic reconfiguration that matches the interconnect to customer needs. The challenge is to push signaling rates as high as possible, use the least expensive technology at each level (electrical, optical), design routers that use all network links well, and employ efficient network topologies.

According to Steve Scott, for ease of programming, global shared memory is unbeatable. It provides the lowest-latency, lowest-overhead communication. It enables fine-grained overlap of computation and communication, and it tolerates latency through processor concurrency. In contrast, message-passing concurrency is constraining and hard to program. Vectors provide concurrency within a thread; multithreading provides concurrency between threads. The challenge is to exploit locality in order to reduce bandwidth demand. This is done using hierarchical processor architectures to enhance temporal locality, and lightweight thread migration to exploit spatial locality. Other techniques to reduce network traffic, such as atomic memory operations and single-word network transfers when no locality is present, are also used.
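
As a rough, node-level analogue of those fine-grained techniques, the OpenMP sketch below updates a shared histogram with single-word atomic operations: no messages are packaged and no manual synchronization is written. Cray's designs apply the same idea at the network level across globally addressable memory, which this within-a-node example does not attempt to model.

    /* Flavor of fine-grained atomic memory operations in a shared-memory
     * style; OpenMP atomics within one node stand in for the network-level
     * atomic operations described above.  Illustrative only.               */
    #include <stdio.h>

    #define NBINS  256
    #define NITEMS 1000000

    int main(void)
    {
        static long hist[NBINS];                 /* shared table, zero-initialized */

        #pragma omp parallel for
        for (long i = 0; i < NITEMS; i++) {
            int bin = (int)((i * 2654435761u) % NBINS);  /* scattered access pattern */
            #pragma omp atomic
            hist[bin]++;                         /* one-word atomic update in place */
        }

        long total = 0;
        for (int b = 0; b < NBINS; b++) total += hist[b];
        printf("total updates: %ld (expected %d)\n", total, NITEMS);
        return 0;
    }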

In order to exploit the Cascade architecture in an optimal fashion, Cray specified and is implementing a new high-productivity language, named Chapel. Current parallel languages tend to require fragmentation of data and control. They fail to cleanly isolate computation from its virtual processor topology. Also, they tend to support a single type of parallelism, either data parallelism or task parallelism, and fail to support composition of the two. In short, they have few data abstractions.

Chapel, on the other hand, was designed as a language for rapid development of new codes. It supports abstractions for data and task parallelism, arrays (sparse, hierarchical, etc.), graphs, hash tables and so on. Most importantly, it allows prototype code to evolve into production code very quickly.

Thus, the Cascade project addresses performance by providing configurable high-bandwidth memory and interconnects, globally addressable memory with fine-grained synchronization, and heterogeneous processing to match application needs. It preserves portability with a Linux-based OS, standard POSIX APIs and Linux services. It also provides support for mixed legacy languages and programming models. In addition, Chapel provides an architecturally neutral path forward for code.

In summary, the benefits of Cray's adaptive supercomputing vision are:

  • It provides significant application performance improvement by leveraging many forms of parallelism.
  • It potentially increases productivity by creating a transparent interface to multiple processor types.
  • It provides a familiar Linux user environment.
  • It addresses a wider variety of applications.
  • It creates a low-cost test-bed for experimentation on custom processor technologies.

As stated at the beginning of this article: “Adaptive supercomputing is necessary to support the future needs of HPC users, as their need for higher performance on more complex applications outpaces Moore's Law. Adaptive supercomputing will cause a paradigm shift in the way users select and use HPC systems. Cray's experience, existing investments and innovative technologies position Cray to deliver on the adaptive supercomputing vision,” says Steve Scott.

In my view, the Cray vision for adaptive supercomputing is exciting and the phased strategy is very sensible. The big challenge in the next few years is managing the extra complexity, at both the software and hardware levels, so that enhanced productivity is delivered to the user application transparently. Nevertheless, the mission is clear: cross the one petaflop barrier by the end of this decade.

—–

Copyright (c) Christopher Lazou, HiPerCom Consultants, Ltd., UK. March 2006. Brands and names are the property of their respective owners.
