DARPA Selects Cray and IBM for Final Phase of HPCS

By Michael Feldman

November 24, 2006

This week, the Defense Advanced Research Projects Agency (DARPA) selected Cray and IBM as the two Phase III developers for the High Productivity Computing Systems (HPCS) program. Initiated in 2002, the program is designed to produce a new generation of cost-effective, highly productive petascale systems for national security, scientific research and industrial users. The first two phases of HPCS were devoted to critical concept studies and assessments, preliminary research and development, and risk reduction engineering. Over the next four years, the third and final phase of the program will encompass development and demonstration of the HPCS technologies, culminating in a prototype system by each of the two vendors in 2010.

“This is a great day for Cray and the worldwide supercomputing community,” said Peter Ungaro, Cray's president and CEO. “The DARPA HPCS program is an important force that is shaping the future of HPC and the entire computer industry. With this Phase III award, DARPA has recognized Cray as a leading innovator with the technology, vision and expertise required to deliver world-class, revolutionary supercomputing systems.”

“IBM, DARPA and the mission partners will collaborate to develop a powerful and innovative design that will enhance the ability of supercomputers to help government, businesses and individuals,” said Bill Zeitler, senior vice president, IBM Systems and Technology Group. “We believe this new system will accelerate scientific breakthroughs, improve our nation's competitiveness and create new market opportunities.”

The DARPA-led program will be funded in part with money contributed by the NSA and DOE. Over the next four years, Cray will receive $250 million for its effort, while IBM will receive $244 million. The vendors and their contractors are also expected to make substantial investments in their own systems. According to DARPA, both IBM and Cray are obligated to provide at least 50 percent of the government funding amount in company cost-share.

“The vendors would not be producing these systems were it not for the investment by DARPA,” said HPCS program manager William Harrod, in a DARPA conference call on Wednesday. “They of course would have product lines, but they would not be nearly as aggressive in terms of performance and the ability to deliver productivity to their customers. The key here is the ability to deliver productivity to the users. One can construct large systems, but then using them and getting performance out of them is the significant challenge. That's the problem we're trying to attack here.”

Harrod noted that high productivity computing will be a key technology for meeting our national security requirements and enhancing our economic competitiveness. “High productivity computing contributes substantially to the design and development of advanced vehicles and weapons, planning and execution of operational military scenarios, the intelligence problems of cryptanalysis and image processing, the maintenance of our nuclear stockpile, and is a key enabler for science and discovery in security-related fields,” he said.

In Phase III of the program, Cray and IBM will complete the hardware and software designs and technical development of their respective systems. The intention is to create machines capable of two petaflops of sustained performance, scalable to four petaflops. This represents a 10-fold performance increase compared to what was available in 2002. Even more significant is the requirement to increase GUPS (Giga Updates Per Second) performance, which measures a system's ability to perform random memory accesses. This is especially important for applications that process irregular data structures, including certain critical national security workloads. The goal is to achieve GUPS performance of between 8,000 and 64,000. The current high mark goes to IBM Blue Gene/L, which achieves just 35 GUPS.
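To make the GUPS metric concrete, here is a minimal, single-threaded sketch of the kind of random-update kernel the benchmark measures. It is modeled on the HPC Challenge RandomAccess kernel (the power-of-two table, the 4x update count and the shift-register update rule follow that convention), but it is an illustration, not the official benchmark code:

```c
/* Minimal sketch of a RandomAccess-style (GUPS) kernel.
   GUPS = updates performed / seconds / 1e9. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define POLY 0x0000000000000007ULL  /* polynomial used by the HPCC PRNG */

int main(void) {
    size_t n = (size_t)1 << 24;        /* table size: must be a power of two */
    uint64_t *table = malloc(n * sizeof *table);
    for (size_t i = 0; i < n; i++) table[i] = i;

    uint64_t ran = 1;
    size_t updates = 4 * n;            /* HPCC performs 4x table-size updates */
    clock_t t0 = clock();
    for (size_t i = 0; i < updates; i++) {
        /* advance the pseudo-random sequence (shift-register style) */
        ran = (ran << 1) ^ ((int64_t)ran < 0 ? POLY : 0);
        /* XOR-update a random table location: this pattern defeats caches */
        table[ran & (n - 1)] ^= ran;
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("GUPS: %.4f\n", updates / secs / 1e9);
    free(table);
    return 0;
}
```

Because each update touches an effectively random memory location, performance is governed by memory and interconnect latency rather than floating-point hardware, which is why the 8,000 to 64,000 GUPS target is so much more demanding than the petaflops target.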

DARPA has specified some important HPCS Phase III milestones:

   1. Critical design review for software in 18 months.
   2. Critical design review for hardware in 30 months.
   3. Subsystem demonstration in December 2009.
   4. Final prototypes due in 2010.

DARPA will require that the prototype systems developed under HPCS be at least one quarter of the size needed by the agency's mission partners — the NSA, DOE and NNSA. By the end of 2010, both Cray and IBM will have to demonstrate functional systems that will be evaluated by selected government HPC users.

To ensure economic viability, both vendors will be required to prepare a business plan for the development and commercialization of their products. The idea is to make the new technologies applicable to a range of systems, not just high-end government deployments. By ensuring that these systems are commercially viable, the government will not be the sole customer and thus will not have to bear the entire burden of driving the evolution of these technologies.

Cray Cascade

The HPCS system being developed by Cray, called Cascade, is based on the company's vision of “Adaptive Computing,” a heterogeneous processing model in which the system software and the compiler/runtime code will assume responsibility for mapping user applications onto the underlying processor hardware. Cascade will feature extremely high bandwidth global memory, advanced synchronization and multiple processor architectures (scalar, vector, multithreaded and hardware accelerator). Cray says it will exploit the technology of a variety of partners in areas such as software tools and compilers (The Portland Group – PGI), file systems (Cluster File Systems) and storage (DataDirect Networks). In addition, Cray will rely heavily on AMD's multi-core Opteron processor and HyperTransport technologies.

Though Cray could not commit to a specific level of Opteron technology for the 2010 prototype, 8-core AMD processors are expected to be available within the next two or three years. And while there are no specific plans to use AMD's ATI-derived GPU technology today, Cray CEO Peter Ungaro said the company looks forward to working with AMD to bring its GPU technology into the Cascade system as another accelerator option.

Over the next four years, Cray will incorporate elements of the Cascade program into commercially available products, including the peak-petaflops supercomputer, code-named “Baker,” that will be delivered to the Department of Energy's Oak Ridge National Laboratory (ORNL). In addition, ORNL will be one of Cray's Phase III partners, focused on scaling both from the systems perspective and in the performance of key applications.

IBM PERCS

For the IBM HPCS effort, called PERCS, the company plans to make use of its POWER-based computer technologies. According to IBM, the DARPA award will substantially increase research and development activity in mainline IBM technologies planned for delivery in 2010 and beyond, such as the next-generation POWER7 processor, the AIX operating system, IBM's General Parallel File System, IBM's Parallel Environment, and IBM's interconnect and storage subsystems — technologies that are driving IBM's commercial product portfolio. IBM also plans to develop a robust HPC software stack and development tools to improve programmer productivity.

“These DARPA initiatives will propel IBM to far exceed the traditional 2X performance improvement over 18 months,” said Ravi Arimilli, IBM Fellow and Principal Investigator of POWER7. “We are embarking on a bold journey to deliver a 100X improvement in sustained performance over 48 months with a simpler and easy to use platform. Harnessing the development capabilities of IBM towards this disruptive design will drive the frontiers of science and business.”

Challenges Ahead

The difficulties of accomplishing all this in four years are considerable. At last week's “High Productivity Computing and Usable Petascale Systems” panel at SC06 in Tampa, panelists Steve Scott (Cray), Rama Govindaraju (IBM), Jim Mitchell (Sun Microsystems) and Bob Lucas (University of Southern California) gave their perspectives on the challenges of DARPA's HPCS program. Jeremy Kepner (MIT Lincoln Laboratory) organized and chaired the panel and also participated in the discussion. Among the group, there was broad consensus about the biggest challenges for HPCS systems.

It was generally agreed that petascale software (system and application) is trailing petascale hardware. The complexity of programming at this level will limit immediate exploitation of these systems. It was pointed out that increasing (peak) FLOPS is relatively easy to accomplish by just adding more floating-point hardware, but to use the FLOPS productively requires software that can be parallelized. Sun's Mitchell said that until the software scales, these systems will only be used as capacity machines – and rather expensive ones at that. Govindaraju agreed by noting that “peak performance is growing away from sustained performance.”
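The gap the panelists describe is the one Amdahl's law quantifies (the law is not cited in the discussion, but it is the standard way to make the point): if only a fraction p of a code parallelizes, speedup on N processors is bounded by 1/((1-p) + p/N). A quick sketch, using an assumed petascale-era processor count purely for illustration:

```c
/* Illustrative only: why adding FLOPS is not enough.
   Amdahl's law: speedup on N processors with parallel fraction p
   is S = 1 / ((1 - p) + p / N). */
#include <stdio.h>

static double speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    double n = 100000.0;  /* assumed processor count for a petascale machine */
    /* even 99%-parallel code tops out near 100x on 100,000 processors */
    printf("p=0.99   -> %6.0fx\n", speedup(0.99,   n));
    printf("p=0.999  -> %6.0fx\n", speedup(0.999,  n));
    printf("p=0.9999 -> %6.0fx\n", speedup(0.9999, n));
    return 0;
}
```

Even a code that is 99.99 percent parallel extracts less than a tenth of the peak of a 100,000-processor machine, which is exactly the sense in which peak performance “grows away” from sustained performance.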

On the positive side, the use of a flatter memory hierarchy will increase performance and be easier to program than the distributed memory model of cluster architectures. This will help raise the level of software abstraction, one of the key enablers of high productivity. However, the HPCS languages themselves, which are still under development, are not short-term solutions. In the interim, the more established Partitioned Global Address Space (PGAS) languages, such as Unified Parallel C (UPC), Co-Array Fortran (CAF) and Titanium, need to be made more widely available to give developers access to more productive software environments.
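To show what the PGAS model buys the programmer, here is a small sketch in UPC, a parallel extension of C. The shared array is physically distributed across threads, yet any thread can read or write any element with ordinary array syntax, with no explicit message passing. This is generic UPC 1.2-style code, not drawn from any of the HPCS efforts, and it assumes compilation with a fixed thread count (e.g., Berkeley UPC's `upcc -T4`):

```c
/* PGAS in miniature: a distributed array with global addressing. */
#include <upc.h>
#include <stdio.h>

#define N 1024
shared double x[N];   /* globally addressable; default cyclic layout
                         places x[i] on thread i % THREADS */

int main(void) {
    /* upc_forall runs iteration i on the thread that owns x[i],
       so each thread writes only its local elements */
    upc_forall (int i = 0; i < N; i++; &x[i])
        x[i] = 2.0 * i;

    upc_barrier;      /* synchronize before reading remote elements */

    if (MYTHREAD == 0)  /* an ordinary read, even though x[N-1] may be remote */
        printf("x[N-1] = %f (owned by thread %d)\n",
               (double)x[N - 1], (int)((N - 1) % THREADS));
    return 0;
}
```

The productivity argument is visible in the last line: a remote read is just an array reference, where the equivalent message-passing code would need an explicit send/receive pair and matching bookkeeping on both sides.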

Another common theme from the panel involved the system reliability of extremely large machines. When systems scale to tens of thousands or hundreds of thousands of processors and hundreds of terabytes of memory, the mean time between failures (MTBF) shrinks to the point that a systematic approach must be developed to manage component failure — or as Mitchell observed, the ability to “compute through failure.” Ideally this means that hardware and system software mechanisms must be in place to insulate application code from system-level failures. Many of the panelists thought that software resiliency was probably the most important software technology for HPCS systems.
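The mechanisms the panelists have in mind live mostly in hardware and system software, but the underlying idea can be shown with a minimal application-level checkpoint/restart sketch in plain C (the file name, step count and interval are invented for illustration): periodically persist enough state that a failed run resumes rather than restarting from zero.

```c
/* Minimal checkpoint/restart sketch (illustrative only). */
#include <stdio.h>

#define STEPS         1000000L
#define CKPT_INTERVAL  100000L
#define CKPT_FILE "state.ckpt"   /* hypothetical checkpoint file */

int main(void) {
    long step = 0;
    double state = 0.0;

    /* On startup, resume from the last checkpoint if one exists. */
    FILE *f = fopen(CKPT_FILE, "rb");
    if (f) {
        if (fread(&step, sizeof step, 1, f) != 1 ||
            fread(&state, sizeof state, 1, f) != 1) {
            step = 0;            /* unreadable checkpoint: start over */
            state = 0.0;
        }
        fclose(f);
    }

    for (; step < STEPS; step++) {
        state += 1.0 / (step + 1);   /* stand-in for the real computation */

        if ((step + 1) % CKPT_INTERVAL == 0) {
            long next = step + 1;    /* resume *after* this iteration */
            f = fopen(CKPT_FILE, "wb");
            if (f) {
                fwrite(&next, sizeof next, 1, f);
                fwrite(&state, sizeof state, 1, f);
                fclose(f);   /* production code would write atomically */
            }
        }
    }
    printf("done: state = %f after %ld steps\n", state, step);
    return 0;
}
```

At HPCS scale the same persist-and-resume discipline has to operate across hundreds of thousands of processors without the application author writing any of it, which is why the panel ranked resiliency so highly.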

Cray CTO Steve Scott offered a message of optimism. He observed that petaflop hardware is only two years away, and that the importance of increasing global bandwidth, scaling software for multi-core processors and establishing system resiliency is well understood, if not yet solved. Development of the hardware, operating system software and programming language environments is already underway. “I think we're going to get there,” concluded Scott.
