DARPA Selects Cray and IBM for Final Phase of HPCS

By Michael Feldman

November 24, 2006

This week, the Defense Advanced Research Projects Agency (DARPA) selected Cray and IBM as the two Phase III developers for the High Productivity Computing Systems (HPCS) program. Initiated in 2002, the program is designed to produce a new generation of cost-effective, highly productive petascale systems for national security, scientific research and industrial users. The first two phases of HPCS were devoted to critical concept studies and assessments, preliminary research and development, and risk reduction engineering. Over the next four years, the third and final phase of the program will encompass development and demonstration of the HPCS technologies, culminating in a prototype system by each of the two vendors in 2010.

“This is a great day for Cray and the worldwide supercomputing community,” said Peter Ungaro, Cray's president and CEO. “The DARPA HPCS program is an important force that is shaping the future of HPC and the entire computer industry. With this Phase III award, DARPA has recognized Cray as a leading innovator with the technology, vision and expertise required to deliver world-class, revolutionary supercomputing systems.”

“IBM, DARPA and the mission partners will collaborate to develop a powerful and innovative design that will enhance the ability of supercomputers to help government, businesses and individuals,” said Bill Zeitler, senior vice president, IBM Systems and Technology Group. “We believe this new system will accelerate scientific breakthroughs, improve our nation's competitiveness and create new market opportunities.”

The DARPA-led program will use money contributed by the NSA and DOE to help fund the effort. Over the next four years, Cray will receive $250 million for their effort, while IBM will receive $244 million. The vendors and their contractors are also expected to make substantial investments in their own systems. According to DARPA, both IBM and Cray are obligated to provide at least 50 percent of the government funding amount in company cost-share.

“The vendors would not be producing these systems were it not for the investment by DARPA,” said HPCS program manager William Harrod, in a DARPA conference call on Wednesday. “They of course would have product lines, but they would not be nearly as aggressive in terms of performance and the ability to deliver productivity to their customers. The key here is the ability to deliver productivity to the users. One can construct large systems, but then using them and getting performance out of them is the significant challenge. That's the problem we're trying to attack here.”

Harrod noted that high productivity computing will be a key technology for meeting our national security requirements and to enhance our economic competitiveness. “High productivity computing contributes substantially to the design and development of advanced vehicles and weapons, planning and execution of operational military scenarios, the intelligence problems of cryptanalysis and image processing, the maintenance of our nuclear stockpile, and is a key enabler for science and discovery in security-related fields,” he said.

In Phase III of the program, Cray and IBM will complete the hardware and software designs and technical development of their respective systems. The intention is to create machines capable of two petaflops of sustained performance, scalable to four petaflops. This represents a 10-fold performance increase compared to what was available in 2002. Even more significant is the requirement to increase the GUPS (Giga Updates Per Second) performance, which measures a system's ability to perform random memory accesses. This is especially important for applications which process irregular data structures, such as certain critical national security applications. The goal is to achieve GUPS performance of between 8,000 and 64,000. The current high mark goes to IBM Blue Gene/L, which achieves just 35 GUPS.
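
To make the metric concrete, the kernel below is a minimal, single-node sketch of a GUPS-style measurement (not the official HPC Challenge RandomAccess benchmark): it performs random read-modify-write updates on a large table and reports updates per second. The table size, update count and xorshift generator are illustrative choices.

    /* Minimal single-node sketch of a GUPS-style measurement (illustrative,
       not the official HPC Challenge RandomAccess benchmark). */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    #define LOG2_TABLE 24                       /* 16M-entry table, ~128 MB */
    #define TABLE_SIZE (1ULL << LOG2_TABLE)
    #define NUPDATES   (4 * TABLE_SIZE)

    int main(void)
    {
        uint64_t *table = malloc(TABLE_SIZE * sizeof *table);
        if (!table) return 1;
        for (uint64_t i = 0; i < TABLE_SIZE; i++) table[i] = i;

        uint64_t x = 0x123456789abcdef0ULL;     /* xorshift64 random stream */
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (uint64_t i = 0; i < NUPDATES; i++) {
            x ^= x << 13; x ^= x >> 7; x ^= x << 17;
            table[x & (TABLE_SIZE - 1)] ^= x;   /* random read-modify-write */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
        printf("approximate GUPS: %.4f\n", NUPDATES / secs / 1e9);
        free(table);
        return 0;
    }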

DARPA has specified some important HPCS Phase III milestones:

   1. Critical design review for software in 18 months.
   2. Critical design review for hardware in 30 months.
   3. Subsystem demonstration in December 2009.
   4. Final prototypes due in 2010.

DARPA will require the prototype systems developed under HPCS to be at least one quarter of the size needed by the agency's mission partners — the NSA, DOE and NNSA. By the end of 2010, both Cray and IBM will have to demonstrate functional systems that will be evaluated by selected government HPC users.

To ensure economic viability, both vendors will be required to prepare a business plan for the development and commercialization of their products. The idea is to make the new technologies applicable to a range of systems, not just high-end government deployments. By ensuring that these systems are commercially viable, the government will not be the sole customer and thus, will not have to bear the entire burden of driving the evolution of these technologies.

Cray Cascade

The HPCS system being developed by Cray, called Cascade, is based on the company's vision of 'Adaptive Computing', a heterogeneous processing model in which the system software and the compiler/runtime code will assume responsibility for mapping user applications onto the underlying processor hardware. Cascade will feature extremely high bandwidth global memory, advanced synchronization and multiple processor architectures (scalar, vector, multithreaded, and hardware accelerator). Cray says it will exploit the technology of a variety of partners in areas such as software tools and compilers (The Portland Group – PGI), file systems (Cluster File Systems), and storage (DataDirect Networks). In addition, Cray will rely heavily on AMD's multi-core Opteron processor and HyperTransport technologies.

Though Cray could not commit to the level of Opteron technology for the prototype in the 2010 time frame, 8-core AMD processors are expected to be available within the next two or three years. And while there are no specific plans to use AMD's ATI-derived GPU technology today, Cray CEO Peter Ungaro said that they are looking forward to working with AMD to bring their GPU technology into the Cascade system, as another accelerator option.

Over the next four years, Cray will incorporate elements of the Cascade program into commercially available products, including the peak-petaflops supercomputer, code-named “Baker,” that will be delivered to the Department of Energy's Oak Ridge National Laboratory (ORNL). In addition, ORNL will be one of Cray's Phase III partners, focused on scaling, both from a systems perspective and in terms of the performance of key applications.

IBM PERCS

For the IBM HPCS effort, called PERCS, the company plans to make use of their POWER-based computer technologies. According to IBM, the DARPA award will substantially increase research and development activities into mainline IBM technologies planned to be delivered in 2010 and beyond, such as the next generation POWER7 processor, the AIX operating system, IBM's General Parallel File System, IBM's Parallel Environment and IBM's Interconnect and Storage Subsystems — technologies that are driving IBM's commercial product portfolio. IBM also plans to develop a robust HPC software stack and development tools to improve programmer productivity.

“These DARPA initiatives will propel IBM to far exceed the traditional 2X performance improvement over 18 months,” said Ravi Arimilli, IBM Fellow and Principal Investigator of POWER7. “We are embarking on a bold journey to deliver a 100X improvement in sustained performance over 48 months with a simpler and easy to use platform. Harnessing the development capabilities of IBM towards this disruptive design will drive the frontiers of science and business.”

Challenges Ahead

The difficulties of accomplishing all this in four years are considerable. At last week's “High Productivity Computing and Usable Petascale Systems” panel at SC06 in Tampa, panelists Steve Scott (Cray), Rama Govindaraju (IBM), Jim Mitchell (Sun Microsystems) and Bob Lucas (University of Southern California) gave their perspectives on the challenges of DARPA's HPCS program. Jeremy Kepner (MIT Lincoln Laboratory) organized and chaired the panel and also participated in the discussion. Among the group, there was broad consensus about the biggest challenges for HPCS systems.

It was generally agreed that petascale software (system and application) is trailing petascale hardware. The complexity of programming at this level will limit immediate exploitation of these systems. It was pointed out that increasing (peak) FLOPS is relatively easy to accomplish by just adding more floating-point hardware, but to use the FLOPS productively requires software that can be parallelized. Sun's Mitchell said that until the software scales, these systems will only be used as capacity machines – and rather expensive ones at that. Govindaraju agreed by noting that “peak performance is growing away from sustained performance.”
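
The panel did not cite Amdahl's law, but it is one way to quantify the point: even a small serial fraction in the software caps the payoff from adding more floating-point hardware. The serial fraction and processor counts in the sketch below are illustrative.

    /* Amdahl's law sketch: the serial fraction, not the processor count,
       bounds sustained speedup (values below are illustrative). */
    #include <stdio.h>

    int main(void)
    {
        double serial = 0.01;                   /* 1% of the work stays serial */
        long procs[] = {1000, 100000, 1000000};

        for (int i = 0; i < 3; i++) {
            double speedup = 1.0 / (serial + (1.0 - serial) / procs[i]);
            printf("%8ld processors -> speedup bound %.0f\n", procs[i], speedup);
        }
        return 0;
    }

With just one percent of the work left serial, the speedup bound stays near 100x no matter how many processors are added, which is the gap between peak and sustained performance in miniature.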

On the positive side, the use of a flatter memory hierarchy will increase performance and be easier to program than the distributed memory model of cluster architectures. This will help to raise the level of software abstraction, one of the key enablers of high productivity. However, the HPCS languages themselves, which are still under development, are not short-term solutions. In the interim, the more established Partitioned Global Address Space (PGAS) languages, such as Unified Parallel C (UPC), Co-Array Fortran (CAF) and Titanium, need to be made more widely available to give developers access to more productive software environments.
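
The PGAS languages named above build global-address-space access into the language itself. As a rough illustration of the programming style only (not UPC, CAF or Titanium code), the C sketch below uses MPI-2 one-sided operations so that one rank can read another rank's slice of a conceptually global array without the owner posting a matching receive; the slice size and neighbor pattern are arbitrary choices for the example.

    /* PGAS-style access illustrated with MPI-2 one-sided operations in C
       (illustrative only; not UPC, CAF or Titanium). Each rank exposes its
       slice of a conceptually global array; any rank can read a remote
       slice without the owner issuing a matching receive. */
    #include <stdio.h>
    #include <mpi.h>

    #define SLICE 4                             /* elements owned per rank */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        double local[SLICE];                    /* this rank's share of the array */
        for (int i = 0; i < SLICE; i++) local[i] = rank * SLICE + i;

        MPI_Win win;
        MPI_Win_create(local, (MPI_Aint)(SLICE * sizeof(double)), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        double remote[SLICE];
        int neighbor = (rank + 1) % nprocs;     /* read the next rank's slice */

        MPI_Win_fence(0, win);
        MPI_Get(remote, SLICE, MPI_DOUBLE, neighbor, 0, SLICE, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);

        printf("rank %d reads neighbor %d's first element: %g\n",
               rank, neighbor, remote[0]);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }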

Another common theme from the panel involved system reliability of extremely large machines. When systems scale to tens of thousands or hundreds of thousands of processors and hundreds of terabytes of memory, the MTBF rates are such that a systematic approach must be developed in order to manage component failure — or as Mitchell observed, the ability to “compute through failure.” Ideally this means that hardware and system software mechanisms must be in place to insulate application code from system-level failures. Many of the panelists thought that software resiliency was probably the most important software technology for HPCS systems.
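
The panelists did not describe specific vendor mechanisms, and their goal goes beyond the long-standing baseline of application-level checkpoint/restart, but that baseline is a useful reference point for what computing through failure must improve on. The sketch below is a minimal, hypothetical example; the checkpoint file name, interval and loop body are placeholders.

    /* Minimal, hypothetical sketch of application-level checkpoint/restart:
       the loop periodically saves its state so a restart after a failure
       resumes from the last checkpoint rather than from step zero. */
    #include <stdio.h>

    #define N_STEPS    1000000L
    #define CKPT_EVERY 10000L
    #define CKPT_FILE  "state.ckpt"             /* hypothetical checkpoint file */

    int main(void)
    {
        long step = 0;
        double state = 0.0;

        FILE *f = fopen(CKPT_FILE, "rb");       /* resume if a checkpoint exists */
        if (f) {
            if (fread(&step, sizeof step, 1, f) != 1 ||
                fread(&state, sizeof state, 1, f) != 1) {
                step = 0;                       /* unreadable checkpoint: start over */
                state = 0.0;
            }
            fclose(f);
        }

        for (; step < N_STEPS; step++) {
            state += 1e-6 * step;               /* stand-in for real computation */

            if ((step + 1) % CKPT_EVERY == 0) { /* periodic checkpoint */
                long next = step + 1;
                f = fopen(CKPT_FILE, "wb");
                if (f) {
                    fwrite(&next, sizeof next, 1, f);
                    fwrite(&state, sizeof state, 1, f);
                    fclose(f);
                }
            }
        }
        printf("final state: %g\n", state);
        return 0;
    }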

Cray CTO Steve Scott offered a message of optimism. He observed that petaflop hardware is only two years away and that the importance of increasing global bandwidth, scaling software for multi-core processors and establishing system resiliency is well understood, if not yet solved. Development of the hardware, operating system software and programming language environments is already underway. “I think we're going to get there,” concluded Scott.
