DARPA Selects Cray and IBM for Final Phase of HPCS

By Michael Feldman

November 24, 2006

This week, the Defense Advanced Research Projects Agency (DARPA) selected Cray and IBM as the two Phase III developers for the High Productivity Computing Systems (HPCS) program. Initiated in 2002, the program is designed to produce a new generation of cost-effective, highly productive petascale systems for national security, scientific research and industrial users. The first two phases of HPCS were devoted to critical concept studies and assessments, preliminary research and development, and risk reduction engineering. Over the next four years, the third and final phase of the program will encompass development and demonstration of the HPCS technologies, culminating in a prototype system by each of the two vendors in 2010.

“This is a great day for Cray and the worldwide supercomputing community,” said Peter Ungaro, Cray's president and CEO. “The DARPA HPCS program is an important force that is shaping the future of HPC and the entire computer industry. With this Phase III award, DARPA has recognized Cray as a leading innovator with the technology, vision and expertise required to deliver world-class, revolutionary supercomputing systems.”

“IBM, DARPA and the mission partners will collaborate to develop a powerful and innovative design that will enhance the ability of supercomputers to help government, businesses and individuals,” said Bill Zeitler, senior vice president, IBM Systems and Technology Group. “We believe this new system will accelerate scientific breakthroughs, improve our nation's competitiveness and create new market opportunities.”

The DARPA-led program will be funded in part with money contributed by the NSA and DOE. Over the next four years, Cray will receive $250 million for its effort, while IBM will receive $244 million. The vendors and their contractors are also expected to make substantial investments in their own systems. According to DARPA, both IBM and Cray are obligated to provide at least 50 percent of the government funding amount in company cost-share.

“The vendors would not be producing these systems were it not for the investment by DARPA,” said HPCS program manager William Harrod, in a DARPA conference call on Wednesday. “They of course would have product lines, but they would not be nearly as aggressive in terms of performance and the ability to deliver productivity to their customers. The key here is the ability to deliver productivity to the users. One can construct large systems, but then using them and getting performance out of them is the significant challenge. That's the problem we're trying to attack here.”

Harrod noted that high productivity computing will be a key technology for meeting national security requirements and enhancing the nation's economic competitiveness. “High productivity computing contributes substantially to the design and development of advanced vehicles and weapons, planning and execution of operational military scenarios, the intelligence problems of cryptanalysis and image processing, the maintenance of our nuclear stockpile, and is a key enabler for science and discovery in security-related fields,” he said.

In Phase III of the program, Cray and IBM will complete the hardware and software designs and technical development of their respective systems. The intention is to create machines capable of two petaflops of sustained performance, scalable to four petaflops. This represents a 10-fold performance increase compared to what was available in 2002. Even more significant is the requirement to increase the GUPS (Giga Updates Per Second) performance, which measures a system's ability to perform random memory accesses. This is especially important for applications which process irregular data structures, such as certain critical national security applications. The goal is to achieve GUPS performance of between 8,000 and 64,000. The current high mark goes to IBM Blue Gene/L, which achieves just 35 GUPS.
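
To make the benchmark concrete, below is a minimal single-threaded C sketch of a RandomAccess-style update loop in the spirit of the HPC Challenge kernel that defines GUPS. The table size, update count and random-number generator are illustrative stand-ins, not the official benchmark code; the point is that performance here is bound by memory latency rather than floating-point throughput.

```c
/* Minimal GUPS-style (RandomAccess) sketch. Loosely modeled on the
 * HPC Challenge benchmark; table size, update count and the LCG
 * constants (Knuth's MMIX generator) are illustrative stand-ins. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TABLE_BITS 20                       /* 2^20 words, about 8 MB */
#define TABLE_SIZE (1ULL << TABLE_BITS)
#define N_UPDATES  (4 * TABLE_SIZE)

int main(void)
{
    uint64_t *table = malloc(TABLE_SIZE * sizeof *table);
    if (!table) return 1;
    for (uint64_t i = 0; i < TABLE_SIZE; i++)
        table[i] = i;

    uint64_t ran = 1;
    clock_t t0 = clock();
    for (uint64_t i = 0; i < N_UPDATES; i++) {
        /* A pseudo-random stream drives scattered read-modify-writes;
         * each update touches an effectively random word of the table. */
        ran = ran * 6364136223846793005ULL + 1442695040888963407ULL;
        table[ran >> (64 - TABLE_BITS)] ^= ran;
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Checksum keeps the compiler from eliminating the update loop. */
    uint64_t check = 0;
    for (uint64_t i = 0; i < TABLE_SIZE; i++)
        check ^= table[i];
    printf("checksum %016llx  %.4f GUPS\n",
           (unsigned long long)check, N_UPDATES / secs / 1e9);
    free(table);
    return 0;
}
```

Because each update hits an effectively random word of a large table, caches and prefetchers help very little, which is why GUPS stresses a machine so differently than a dense-matrix benchmark like Linpack does.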

DARPA has specified some important HPCS Phase III milestones:

   1. Critical design review for software in 18 months.
   2. Critical design review for hardware in 30 months.
   3. Subsystem demonstration in December 2009.
   4. Final prototypes due in 2010.

DARPA will require the prototype systems developed under HPCS to be at least one quarter of the size needed by the agency's mission partners — the NSA, DOE and NNSA. By the end of 2010, both Cray and IBM will have to demonstrate functional systems that will be evaluated by selected government HPC users.

To ensure economic viability, both vendors will be required to prepare a business plan for the development and commercialization of their products. The idea is to make the new technologies applicable to a range of systems, not just high-end government deployments. By ensuring that these systems are commercially viable, the government will not be the sole customer and thus will not have to bear the entire burden of driving the evolution of these technologies.

Cray Cascade

The HPCS system being developed by Cray, called Cascade, is based on the company's vision of 'Adaptive Computing', a heterogeneous processing model in which the system software and the compiler/runtime code assume responsibility for mapping user applications onto the underlying processor hardware. Cascade will feature extremely high bandwidth global memory, advanced synchronization and multiple processor architectures (scalar, vector, multithreaded and hardware accelerator). Cray says it will exploit the technology of a variety of partners in areas such as software tools and compilers (The Portland Group – PGI), file systems (Cluster File Systems) and storage (DataDirect Networks). In addition, Cray will rely heavily on AMD's multi-core Opteron processor and HyperTransport technologies.
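
As a purely schematic illustration of that mapping idea — it does not reflect Cray's actual Cascade runtime, and the workload categories and handler names are invented for the example — the toy C dispatcher below binds a computation to the processor type expected to suit its character:

```c
/* Schematic sketch of adaptive mapping: a runtime inspects workload
 * traits (here reduced to an enum) and binds the computation to the
 * processor type expected to run it best. Purely illustrative. */
#include <stdio.h>

typedef enum { KIND_DENSE_VECTOR, KIND_IRREGULAR, KIND_SERIAL } workload_kind;

static void run_on_vector_unit(void) { puts("dispatch: vector pipes"); }
static void run_multithreaded(void)  { puts("dispatch: latency-tolerant threads"); }
static void run_on_scalar_core(void) { puts("dispatch: scalar core"); }

static void adaptive_dispatch(workload_kind k)
{
    switch (k) {
    case KIND_DENSE_VECTOR: run_on_vector_unit(); break; /* regular, bandwidth-hungry */
    case KIND_IRREGULAR:    run_multithreaded();  break; /* pointer-chasing, random access */
    default:                run_on_scalar_core(); break; /* sequential control code */
    }
}

int main(void)
{
    adaptive_dispatch(KIND_DENSE_VECTOR);
    adaptive_dispatch(KIND_IRREGULAR);
    return 0;
}
```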

Though Cray could not commit to the level of Opteron technology that the 2010 prototype will use, 8-core AMD processors are expected to be available within the next two or three years. And while there are no specific plans today to use AMD's ATI-derived GPU technology, Cray CEO Peter Ungaro said the company looks forward to working with AMD to bring its GPU technology into the Cascade system as another accelerator option.

Over the next four years, Cray will incorporate elements of the Cascade program into commercially available products, including the peak-petaflops supercomputer, code-named “Baker,” that will be delivered to the Department of Energy's Oak Ridge National Laboratory (ORNL). In addition, ORNL will be one of Cray's Phase III partners, focused on scaling from both the systems perspective and the performance of key applications.

IBM PERCS

For the IBM HPCS effort, called PERCS, the company plans to make use of its POWER-based computer technologies. According to IBM, the DARPA award will substantially increase research and development of mainline IBM technologies planned for delivery in 2010 and beyond, such as the next-generation POWER7 processor, the AIX operating system, IBM's General Parallel File System, IBM's Parallel Environment and IBM's Interconnect and Storage Subsystems — technologies that are driving IBM's commercial product portfolio. IBM also plans to develop a robust HPC software stack and development tools to improve programmer productivity.

“These DARPA initiatives will propel IBM to far exceed the traditional 2X performance improvement over 18 months,” said Ravi Arimilli, IBM Fellow and Principal Investigator of POWER7. “We are embarking on a bold journey to deliver a 100X improvement in sustained performance over 48 months with a simpler, easier-to-use platform. Harnessing the development capabilities of IBM toward this disruptive design will drive the frontiers of science and business.”

Challenges Ahead

The difficulties of accomplishing all this in four years are considerable. At last week's “High Productivity Computing and Usable Petascale Systems” panel at SC06 in Tampa, panelists Steve Scott (Cray), Rama Govindaraju (IBM), Jim Mitchell (Sun Microsystems) and Bob Lucas (University of Southern California) gave their perspectives on the challenges of DARPA's HPCS program. Jeremy Kepner (MIT Lincoln Laboratory) organized and chaired the panel and also participated in the discussion. Among the group, there was broad consensus about the biggest challenges for HPCS systems.

It was generally agreed that petascale software (system and application) is trailing petascale hardware. The complexity of programming at this level will limit immediate exploitation of these systems. It was pointed out that increasing peak FLOPS is relatively easy to accomplish by simply adding more floating-point hardware, but using those FLOPS productively requires software that can be parallelized. Sun's Mitchell said that until the software scales, these systems will only be used as capacity machines – and rather expensive ones at that. Govindaraju agreed, noting that “peak performance is growing away from sustained performance.”

On the positive side, the use of a flatter memory hierarchy will increase performance and be easier to program than the distributed memory model of cluster architectures. This will help raise the level of software abstraction, one of the key enablers of high productivity. However, the HPCS languages themselves, which are still under development, are not short-term solutions. In the interim, the more established Partitioned Global Address Space (PGAS) languages, such as Unified Parallel C (UPC), Co-Array Fortran (CAF) and Titanium, need to be made more widely available to give developers access to more productive software environments.
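
For readers unfamiliar with the model, the flavor of PGAS can be approximated in plain C with MPI-2 one-sided operations: each rank exposes a window of memory that any other rank can read or write directly, with no matching receive posted by the target. The sketch below is only an analogy to what UPC, CAF and Titanium express directly at the language level; the array size and access pattern are arbitrary.

```c
/* PGAS-flavored one-sided access using MPI-2 windows: every rank
 * exposes a slice of a notional global array that remote ranks can
 * read directly with MPI_Get. Illustrative analogy only; real PGAS
 * languages build this addressing into the language itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    int local[4];                   /* this rank's slice of the "global" array */
    for (int i = 0; i < 4; i++)
        local[i] = rank * 4 + i;

    MPI_Win win;
    MPI_Win_create(local, sizeof local, sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* One-sided read of an element from the next rank's slice; the
     * target rank posts no matching receive. */
    int peer = (rank + 1) % nranks, remote;
    MPI_Win_fence(0, win);
    MPI_Get(&remote, 1, MPI_INT, peer, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);

    printf("rank %d read %d from rank %d\n", rank, remote, peer);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

In a true PGAS language the remote read would simply be an array reference, with the compiler and runtime generating the communication; that is precisely the raised level of abstraction the panel was calling for.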

Another common theme from the panel involved the reliability of extremely large machines. When systems scale to tens or hundreds of thousands of processors and hundreds of terabytes of memory, the mean time between failures (MTBF) shrinks to the point where a systematic approach must be developed to manage component failure — or, as Mitchell observed, to provide the ability to “compute through failure.” Ideally, this means that hardware and system software mechanisms must be in place to insulate application code from system-level failures. Many of the panelists thought that software resiliency was probably the most important software technology for HPCS systems.
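
The most basic building block for computing through failure is application-level checkpoint/restart: the job periodically saves its state so that a crash costs only the work done since the last checkpoint. Below is a bare-bones C sketch of the idea; the file name, state layout and checkpoint interval are hypothetical, and production systems layer system-level fault detection, redundancy and migration on top of this.

```c
/* Bare-bones application-level checkpoint/restart sketch. The file
 * name, state layout and checkpoint interval are hypothetical. */
#include <stdio.h>

#define CKPT_FILE  "state.ckpt"   /* hypothetical checkpoint path */
#define N_STEPS    1000000L
#define CKPT_EVERY 10000L

struct state { long step; double acc; };

static int load_checkpoint(struct state *s)
{
    FILE *f = fopen(CKPT_FILE, "rb");
    if (!f) return 0;                         /* no checkpoint: fresh start */
    int ok = fread(s, sizeof *s, 1, f) == 1;
    fclose(f);
    return ok;
}

static void save_checkpoint(const struct state *s)
{
    FILE *f = fopen(CKPT_FILE ".tmp", "wb");  /* write-then-rename keeps the */
    if (!f) return;                           /* old checkpoint valid if we  */
    fwrite(s, sizeof *s, 1, f);               /* fail mid-write              */
    fclose(f);
    rename(CKPT_FILE ".tmp", CKPT_FILE);
}

int main(void)
{
    struct state s = { 0, 0.0 };
    if (load_checkpoint(&s))
        printf("restarting at step %ld\n", s.step);

    for (; s.step < N_STEPS; s.step++) {
        if (s.step % CKPT_EVERY == 0)
            save_checkpoint(&s);              /* state as of the start of this step */
        s.acc += 1.0 / (s.step + 1);          /* stand-in for real work */
    }
    printf("done: acc = %f\n", s.acc);
    remove(CKPT_FILE);                        /* clean up after a successful run */
    return 0;
}
```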

Cray CTO Steve Scott offered a message of optimism. He observed that petaflop hardware is only two years away, and that the importance of increasing global bandwidth, scaling software for multi-core processors and establishing system resiliency is well understood, if not yet fully solved. Development of the hardware, operating system software and programming language environments is already underway. “I think we're going to get there,” concluded Scott.
