DOE Primes Pump for Exascale Supercomputers

By Michael Feldman

July 12, 2012

Intel, AMD, NVIDIA, and Whamcloud have been awarded tens of millions of dollars by the US Department of Energy (DOE) to kick-start research and development required to build exascale supercomputers. The work will be performed under the FastForward program, a joint effort run by the DOE Office of Science and the National Nuclear Security Administration (NNSA) that will focus on developing future hardware and software technologies capable of supporting such machines.

The program is being contracted through Lawrence Livermore National Security, LLC as part of a multi-lab consortium that includes Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory, and Sandia National Laboratories.

Although we’re only six to eight years away from the first exaflops systems, the DOE’s primary exascale program has yet to be funded. (And since this is an election year in the US, such funding will probably not fall into place until 2013.) In the interim, FastForward was devised to begin the needed R&D on some of the foundational exascale technologies, in particular processors, memory, and storage.

At least some of the impetus for the program came from the vendors themselves. According to Mark Seager, Intel’s CTO for the company’s High Performance Computing Ecosystem group, the DOE was told by multiple commercial partners that research for the component pieces needed to get underway this year if they hoped to field an exascale machine by 2020. That led to the formation of the program, and apparently there was enough loose change rolling around at the Office of Science and NNSA to fund this more modest effort.

Not all of the FastForward subcontracts have been made public, but as of today there are four known awards:

  • Intel: $19 million for both processor and memory technologies
  • AMD: $12.6 million for processor and memory technologies
  • NVIDIA: $12 million for processor technology
  • Whamcloud (along with EMC, Cray and HDF Group): Unknown dollar amount for storage and I/O technologies

Although the work is not intended to fund the development of “near-term capabilities” that are already on vendors’ existing product roadmaps, all of it will build upon ongoing R&D efforts at these companies. The DOE is fine with this, since the commercialization of these technologies is really the only way these government agencies can be assured of cost-effective exascale machines. The FastForward statement of work makes a point of spelling out this arrangement: “While DOE’s extreme-scale computer requirements are a driving factor, these projects must also exhibit the potential for technology adoption by broader segments of the market outside of DOE supercomputer installations.”

For example, Intel’s FastForward processor work will be based on the company’s MIC (Many Integrated Core) architecture, which Intel is initially aiming at the supercomputing market, with the intent of extending it into big data business applications and beyond. The first MIC product, under the Xeon Phi brand, is scheduled to launch before the end of 2012, but this initial offering is at least a couple of generations away from supporting exascale-capable machines. According to Seager, a future processor of this kind will need much improved energy efficiency, a revamped memory interface, and higher resiliency.

Although the x86 ISA will be retained, this future MIC architecture will incorporate some “radical approaches” to bring the technology into the exascale realm. To begin with, says Seager, that means cutting the design’s power draw by a factor of two to three beyond what transistor shrinkage alone would naturally deliver over the rest of the decade. “It’s a daunting challenge to do better than what Moore’s Law will give you,” Seager told HPCwire.

Fortunately, he says, Intel will be able to leverage its near-threshold voltage circuitry research, some of which was funded under UHPC (Ubiquitous High Performance Computing), DARPA’s now-defunct exascale program. Shekhar Borkar, who was the PI for the UHPC work, will be heading up the FastForward effort at Intel along with Seager and former IBMer Al Gara.
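To put that challenge in rough perspective, here is a back-of-envelope sketch (ours, not Intel’s). The baseline figures are Sequoia’s from the June 2012 TOP500 list, the 20 MW envelope is the target commonly cited by DOE for an exascale system, and the per-generation gain from shrinkage is an illustrative assumption:

```python
# Back-of-envelope arithmetic on exascale power efficiency.
SEQUOIA_PFLOPS = 16.32      # Linpack petaflops, #1 system, June 2012 TOP500
SEQUOIA_POWER_MW = 7.89     # reported power draw
EXA_PFLOPS = 1000.0         # 1 exaflops
EXA_POWER_MW = 20.0         # commonly cited DOE power target (assumption)

# Energy efficiency in gigaflops per watt.
baseline_gf_per_w = SEQUOIA_PFLOPS * 1e6 / (SEQUOIA_POWER_MW * 1e6)
target_gf_per_w = EXA_PFLOPS * 1e6 / (EXA_POWER_MW * 1e6)
needed = target_gf_per_w / baseline_gf_per_w

# Assumption: transistor shrinkage alone roughly doubles energy
# efficiency per process generation, with about three generations
# left before 2020.
shrinkage_gain = 2 ** 3

print(f"Baseline: {baseline_gf_per_w:.1f} GF/W, target: {target_gf_per_w:.0f} GF/W")
print(f"Total efficiency gain needed: ~{needed:.0f}x")
print(f"Left over for circuit and architecture work: ~{needed / shrinkage_gain:.1f}x")
```

The leftover factor of roughly three is exactly the gap Seager is describing, and the one near-threshold voltage circuits are meant to close.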

For the exascale memory subcontract, Intel will be leveraging its work with Micron Technology on the Hybrid Memory Cube. The idea is to use similar technology to incorporate 3D stacks of memory chips into the same package as the processor. In-package integration shortens the distance considerably between the processor and the memory, which significantly increases bandwidth and lowers latency. At the same time, cache management is going to be redesigned to optimize the power-performance of memory reads and writes.
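As a rough illustration of why that integration matters, consider the power spent just moving bits at exascale-class bandwidths. All the figures in this sketch are assumptions for illustration, not Intel or Micron numbers:

```python
# Illustrative energy cost of memory traffic, off-package vs. in-package.
OFF_PACKAGE_PJ_PER_BIT = 50.0  # assumed DDR3-era cost to move one bit off-package
IN_PACKAGE_PJ_PER_BIT = 10.0   # assumed cost with 3D-stacked, in-package DRAM
NODE_BW_TB_PER_S = 1.0         # assumed per-node memory bandwidth target

bits_per_s = NODE_BW_TB_PER_S * 1e12 * 8
for label, pj_per_bit in [("off-package DRAM", OFF_PACKAGE_PJ_PER_BIT),
                          ("in-package 3D stack", IN_PACKAGE_PJ_PER_BIT)]:
    watts = bits_per_s * pj_per_bit * 1e-12  # (bits/s) * (pJ/bit) -> watts
    print(f"{label}: ~{watts:.0f} W per node spent moving data")
```

Under these assumed numbers, the in-package stack cuts a 400 W per-node data-movement bill down to about 80 W, which is the kind of saving an exascale power budget depends on.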

Like Intel, AMD will be basing its FastForward processor research on a current design, in this case the company’s APU (Accelerated Processing Unit) product line and the related Heterogeneous Systems Architecture (HSA) standard, according to Alan Lee, AMD’s corporate vice president for Advanced Research and Development. The current crop of APUs, which integrate CPUs and GPUs on-chip, is aimed at consumer devices, such as laptops, netbooks, and other mobile gear. But AMD has designs on extending its heterogeneous portfolio into the server arena, and the DOE just gave the company about 12 million more reasons to do so.

Since AMD first needs to transform its APU into a server design, the chipmaker has a somewhat different, and perhaps longer, path to exascale than Intel, which is at least starting with server-ready silicon. On the other hand, the MIC architecture is not heterogeneous (and may never be), so AMD does have a certain advantage there. “That is the truly unique technology and the strongest one that AMD brings to bear — that we have a world-class CPU and GPU brought together in a single APU,” says Lee.

Lee was less forthcoming about the starting point for the memory research under FastForward, other than to say AMD would be optimizing the technology around its heterogeneous architecture, and that the work would involve high-speed interconnects as well as different types and arrangements of memory.

More than anything, Lee sees this R&D work as producing dividends in other areas of AMD’s business. He says the fundamental technologies that the DOE wants for exascale are those the computer industry needs, not just in the future, but right now, referring to the big data domain, in particular. “I expect that a lot of the technology that you see us develop has the potential to make it into a variety of different server products of different genres,” says Lee.

Counterbalancing the Intel and AMD work is NVIDIA, which will be using the company’s Echelon design as the starting point for its FastForward effort. Echelon, which was also funded under DARPA’s UHPC program, is based on a future 20-teraflop microprocessor that integrates 128 streaming processors, 8 latency-optimized (CPU-type) processors, and 256 MB of SRAM on-chip. The technology is in line to follow Maxwell, NVIDIA’s GPU architecture scheduled to take the reins from Kepler in a couple of years. Unlike the Intel and AMD efforts, NVIDIA’s contract is for processor technology only, although the Echelon design also specified an exascale-capable memory subsystem.
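The published Echelon figures imply some simple arithmetic about what such a chip would mean at system scale; the derived numbers in this sketch are ours and assume peak rather than sustained performance:

```python
# Derived arithmetic from the published Echelon design figures.
CHIP_TFLOPS = 20.0          # per-chip peak performance
STREAMING_PROCESSORS = 128  # streaming processors per chip

per_sm_gflops = CHIP_TFLOPS * 1e3 / STREAMING_PROCESSORS
chips_per_exaflop = 1e18 / (CHIP_TFLOPS * 1e12)

print(f"~{per_sm_gflops:.0f} GF peak per streaming processor")
print(f"~{chips_per_exaflop:,.0f} Echelon-class chips for a peak exaflop")
```

That works out to roughly 156 gigaflops per streaming processor and on the order of 50,000 chips for a peak exaflop, before any allowance for sustained performance or node overheads.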

While the DOE spread its bets around for the FastForward processor and memory research, only one storage subcontract was awarded. That went to Whamcloud, which, in conjunction with EMC, Cray, and HDF Group, got the nod to perform the R&D work for storage and I/O.

The work specifies bringing object storage into the exascale realm and will be based on the Lustre parallel file system. As a result, any development in this area will be open source and available to the Lustre community.

Although the FastForward contracts limit their scope to specific exascale components rather than complete systems, the research won’t be performed in a vacuum. The vendors are expected to work in conjunction with the DOE’s exascale co-design centers, which encapsulate the various proxy applications, algorithms, and programming models important to the agency. The idea is to align the vendors’ R&D with the DOE’s application needs and expectations, the implication being that these are general enough to apply to a wide range of exascale codes both inside and outside the Energy Department.

All the FastForward contracts have a two-year lifetime, so they are slated to expire in 2014. The follow-on DOE work to design and build entire exascale supercomputers is dependent on future budgets. Assuming the feds come through with the funding, that effort is expected to cost hundreds of millions of dollars over the next several years.
