DARPA Selects Cray and IBM for Final Phase of HPCS

By Michael Feldman

November 24, 2006

This week, the Defense Advanced Research Projects Agency (DARPA) selected Cray and IBM as the two Phase III developers for the High Productivity Computing Systems (HPCS) program. Initiated in 2002, the program is designed to produce a new generation of cost-effective, highly productive petascale systems for national security, scientific research and industrial users. The first two phases of HPCS were devoted to critical concept studies and assessments, preliminary research and development, and risk reduction engineering. Over the next four years, the third and final phase of the program will encompass development and demonstration of the HPCS technologies, culminating in a prototype system by each of the two vendors in 2010.

“This is a great day for Cray and the worldwide supercomputing community,” said Peter Ungaro, Cray's president and CEO. “The DARPA HPCS program is an important force that is shaping the future of HPC and the entire computer industry. With this Phase III award, DARPA has recognized Cray as a leading innovator with the technology, vision and expertise required to deliver world-class, revolutionary supercomputing systems.”

“IBM, DARPA and the mission partners will collaborate to develop a powerful and innovative design that will enhance the ability of supercomputers to help government, businesses and individuals,” said Bill Zeitler, senior vice president, IBM Systems and Technology Group. “We believe this new system will accelerate scientific breakthroughs, improve our nation's competitiveness and create new market opportunities.”

The DARPA-led program will use money contributed by the NSA and DOE to help fund the effort. Over the next four years, Cray will receive $250 million and IBM $244 million. The vendors and their contractors are also expected to make substantial investments in the systems themselves. According to DARPA, both IBM and Cray are obligated to provide at least 50 percent of the government funding amount in company cost-share.

“The vendors would not be producing these systems were it not for the investment by DARPA,” said HPCS program manager William Harrod in a DARPA conference call on Wednesday. “They of course would have product lines, but they would not be nearly as aggressive in terms of performance and the ability to deliver productivity to their customers. The key here is the ability to deliver productivity to the users. One can construct large systems, but then using them and getting performance out of them is the significant challenge. That's the problem we're trying to attack here.”

Harrod noted that high productivity computing will be a key technology for meeting our national security requirements and enhancing our economic competitiveness. “High productivity computing contributes substantially to the design and development of advanced vehicles and weapons, planning and execution of operational military scenarios, the intelligence problems of cryptanalysis and image processing, the maintenance of our nuclear stockpile, and is a key enabler for science and discovery in security-related fields,” he said.

In Phase III of the program, Cray and IBM will complete the hardware and software designs and technical development of their respective systems. The intention is to create machines capable of two petaflops of sustained performance, scalable to four petaflops. This represents a 10-fold performance increase over what was available in 2002. Even more significant is the requirement to increase GUPS (Giga Updates Per Second) performance, which measures a system's ability to perform random memory accesses. This is especially important for applications that process irregular data structures, such as certain critical national security applications. The goal is to achieve between 8,000 and 64,000 GUPS. The current high mark belongs to IBM's Blue Gene/L, which achieves just 35 GUPS.
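
To give a sense of what GUPS measures, below is a minimal serial sketch of a random-update kernel in the spirit of the HPC Challenge RandomAccess benchmark. It is an illustration only, not the official benchmark code; the table size and update count are arbitrary choices for this sketch.

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    #define TABLE_BITS 22                        /* 4M-entry table (32 MB); real runs use much more */
    #define TABLE_SIZE (1ULL << TABLE_BITS)
    #define NUPDATE    (4 * TABLE_SIZE)          /* benchmark convention: 4 updates per table entry */

    int main(void) {
        static uint64_t table[TABLE_SIZE];
        for (uint64_t i = 0; i < TABLE_SIZE; i++)
            table[i] = i;

        /* pseudo-random address stream: every update touches an
           unpredictable location, defeating caches and prefetchers */
        uint64_t ran = 1;
        clock_t t0 = clock();
        for (uint64_t i = 0; i < NUPDATE; i++) {
            ran = (ran << 1) ^ ((int64_t)ran < 0 ? 0x7ULL : 0);
            table[ran & (TABLE_SIZE - 1)] ^= ran; /* read-modify-write at a random index */
        }
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("%.4f GUPS\n", NUPDATE / secs / 1e9);
        return 0;
    }

Because each update lands at an effectively random address, performance is governed by memory and interconnect latency rather than floating-point capability, which is exactly why irregular applications stress GUPS rather than FLOPS.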

DARPA has specified some important HPCS Phase III milestones:

   1. Critical design review for software in 18 months.
   2. Critical design review for hardware in 30 months.
   3. Subsystem demonstration in December 2009.
   4. Final prototypes due in 2010.

DARPA will require the prototype systems developed under HPCS to be at least one quarter of the size needed by the agency's mission partners – the NSA, DOE and NNSA. By the end of 2010, both Cray and IBM will have to demonstrate functional systems that will be evaluated by selected government HPC users.

To ensure economic viability, both vendors will be required to prepare a business plan for the development and commercialization of their products. The idea is to make the new technologies applicable to a range of systems, not just high-end government deployments. If these systems are commercially viable, the government will not be the sole customer and thus will not have to bear the entire burden of driving the evolution of these technologies.

Cray Cascade

The HPCS system being developed by Cray, called Cascade, is based on the company's vision of 'Adaptive Computing', a heterogeneous processing model in which the system software and the compiler/runtime code assume responsibility for mapping user applications onto the underlying processor hardware. Cascade will feature extremely high bandwidth global memory, advanced synchronization and multiple processor architectures (scalar, vector, multithreaded and hardware accelerator). Cray says it will exploit the technology of a variety of partners in areas such as software tools and compilers (The Portland Group – PGI), file systems (Cluster File Systems) and storage (DataDirect Networks). In addition, Cray will rely heavily on AMD's multi-core Opteron processors and HyperTransport technology.

Though Cray could not commit to a specific level of Opteron technology for the prototype in the 2010 time frame, 8-core AMD processors are expected to be available within the next two or three years. And while there are no specific plans to use AMD's ATI-derived GPU technology today, Ungaro said Cray looks forward to working with AMD to bring that GPU technology into the Cascade system as another accelerator option.

Over the next four years, Cray will incorporate elements of the Cascade program into commercially available products, including the peak-petaflops supercomputer, code-named “Baker,” that will be delivered to the Department of Energy's Oak Ridge National Laboratory (ORNL). In addition, ORNL will be one of Cray's Phase III partners, focused on scaling from both the systems perspective and the performance of key applications.

IBM PERCS

For its HPCS effort, called PERCS, IBM plans to make use of its POWER-based computer technologies. According to IBM, the DARPA award will substantially increase research and development on mainline IBM technologies planned for delivery in 2010 and beyond, such as the next-generation POWER7 processor, the AIX operating system, IBM's General Parallel File System, IBM's Parallel Environment and IBM's interconnect and storage subsystems – technologies that are driving IBM's commercial product portfolio. IBM also plans to develop a robust HPC software stack and development tools to improve programmer productivity.

“These DARPA initiatives will propel IBM to far exceed the traditional 2X performance improvement over 18 months,” said Ravi Arimilli, IBM Fellow and Principal Investigator of POWER7. “We are embarking on a bold journey to deliver a 100X improvement in sustained performance over 48 months with a simpler, easier-to-use platform. Harnessing the development capabilities of IBM towards this disruptive design will drive the frontiers of science and business.”
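
For scale, the traditional pace Arimilli cites, 2X every 18 months, compounds to about 2^(48/18), or roughly 6.3X, over 48 months; a 100X gain in the same period is therefore some 16 times beyond the historical trend.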

Challenges Ahead

The difficulties of accomplishing all this in four years are considerable. At last week's “High Productivity Computing and Usable Petascale Systems” panel at SC06 in Tampa, panelists Steve Scott (Cray), Rama Govindaraju (IBM), Jim Mitchell (Sun Microsystems) and Bob Lucas (University of Southern California) gave their perspectives on the challenges of DARPA's HPCS program. Jeremy Kepner (MIT Lincoln Laboratory) organized and chaired the panel and also participated in the discussion. Among the group, there was broad consensus about the biggest challenges for HPCS systems.

It was generally agreed that petascale software (system and application) is trailing petascale hardware, and that the complexity of programming at this level will limit immediate exploitation of these systems. It was pointed out that increasing peak FLOPS is relatively easy, accomplished by simply adding more floating-point hardware, but that using those FLOPS productively requires software that can be parallelized. Sun's Mitchell said that until the software scales, these systems will only be used as capacity machines – and rather expensive ones at that. Govindaraju agreed, noting that “peak performance is growing away from sustained performance.”
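
The limit the panelists describe can be quantified with Amdahl's law, the standard (if simplified) model of parallel speedup; the panel did not cite it directly, but it makes the point concrete. If a fraction p of a program is parallelizable, speedup on n processors is capped at 1/((1 - p) + p/n), as this small sketch computes:

    #include <stdio.h>

    /* Amdahl's law: speedup on n processors when fraction p runs in parallel */
    static double amdahl(double p, double n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        double n = 100000.0;                 /* a petascale-class processor count */
        double fracs[] = { 0.99, 0.999, 0.9999 };
        for (int i = 0; i < 3; i++)
            printf("parallel fraction %.4f -> speedup %8.0f on %.0f processors\n",
                   fracs[i], amdahl(fracs[i], n), n);
        return 0;
    }

On 100,000 processors this prints speedups of roughly 100, 990 and 9,100 for parallel fractions of 99, 99.9 and 99.99 percent; even a 0.01 percent serial residue wastes more than nine tenths of such a machine, which is one way to read Govindaraju's remark.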

On the positive side, the use of a flatter memory hierarchy will increase performance and be easier to program than the distributed memory model of cluster architectures. This will help raise the level of software abstraction, one of the key enablers of high productivity. However, the HPCS languages themselves, which are still under development, are not short-term solutions. In the interim, the more established Partitioned Global Address Space (PGAS) languages, such as Unified Parallel C (UPC), Co-Array Fortran (CAF) and Titanium, need to be made more widely available to give developers access to more productive software environments.
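
For flavor, here is a minimal UPC fragment (UPC extends C with a partitioned shared address space; this is an illustrative sketch rather than production code). Every thread sees one global array, and the compiler and runtime handle data placement and remote access in place of explicit message passing:

    #include <upc.h>
    #include <stdio.h>

    /* one logically global array; UPC distributes it cyclically across threads */
    shared int a[16 * THREADS];

    int main(void) {
        /* the affinity expression &a[i] runs each iteration on the
           thread that owns element i -- no explicit messages needed */
        upc_forall (int i = 0; i < 16 * THREADS; i++; &a[i])
            a[i] = MYTHREAD;

        upc_barrier;                        /* wait for all threads */

        if (MYTHREAD == 0)                  /* thread 0 reads remote elements directly */
            printf("last element = %d (written by thread %d)\n",
                   a[16 * THREADS - 1], THREADS - 1);
        return 0;
    }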

Another common theme from the panel was the reliability of extremely large machines. When systems scale to tens or hundreds of thousands of processors and hundreds of terabytes of memory, the mean time between failures (MTBF) shrinks to the point that a systematic approach is needed to manage component failure – or, as Mitchell put it, to “compute through failure.” Ideally, hardware and system software mechanisms would insulate application code from system-level failures. Many of the panelists thought software resiliency was probably the most important software technology for HPCS systems.
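
One long-established ingredient of computing through failure is checkpoint/restart, in which an application periodically saves its state so that a crashed run can resume rather than start over. The sketch below shows the idea at the application level; it is illustrative only (the file name and the loop are hypothetical stand-ins) and says nothing about the system-level mechanisms the vendors will actually build.

    #include <stdio.h>

    #define STEPS      1000000
    #define CKPT_EVERY 10000
    #define CKPT_FILE  "state.ckpt"         /* hypothetical checkpoint file name */

    int main(void) {
        long step = 0;
        double state = 0.0;

        /* on restart, resume from the last saved checkpoint if one exists */
        FILE *f = fopen(CKPT_FILE, "rb");
        if (f) {
            if (fread(&step, sizeof step, 1, f) != 1 ||
                fread(&state, sizeof state, 1, f) != 1)
                step = 0, state = 0.0;      /* corrupt checkpoint: start over */
            fclose(f);
        }

        for (; step < STEPS; step++) {
            state += 1.0 / (step + 1);      /* stand-in for real computation */
            if (step % CKPT_EVERY == 0) {   /* periodically persist progress */
                f = fopen(CKPT_FILE, "wb");
                if (f) {
                    fwrite(&step, sizeof step, 1, f);
                    fwrite(&state, sizeof state, 1, f);
                    fclose(f);
                }
            }
        }
        printf("done: state = %f\n", state);
        remove(CKPT_FILE);                  /* clean up after a successful run */
        return 0;
    }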

Cray CTO Steve Scott offered a message of optimism. He observed that petaflop hardware is only two years away, and that the importance of increasing global bandwidth, scaling software for multi-core processors and establishing system resiliency is well understood, if not yet solved. Development of the hardware, operating system software and programming language environments is already underway. “I think we're going to get there,” concluded Scott.
