Collaborative Supercomputing

By Michael Feldman

September 11, 2008

The big news in the science community this week was the kickoff of CERN’s Large Hadron Collider (LHC), the $10 billion atom smasher that sent its first proton beams through the device’s 17-mile underground tunnel in Switzerland and France. These initial tests were the culmination of 15 years of planning and development that brought together 80 countries and thousands of individual researchers around the world. While it remains to be seen what scientific discoveries will eventually result from the LHC experiments, there is no doubt it represents the biggest and most ambitious global science project today.

Today, though, I’m going to talk about another set of science community partnerships, although these have received much less attention from the press. For the past seven years, the U.S. Department of Energy’s (DOE) Office of Science has opened the doors to its terascale supercomputers and changed the way many U.S. scientists do cutting-edge research. Through the SciDAC and INCITE programs, the Office of Science has expanded the agency’s high-end computing capabilities while spreading supercomputing talent and hardware resources across the broader research community.

In most cases, these collaborations were confined to U.S.-based science, but in other cases, the DOE partnered with researchers from around the world. In fact, the DOE (along with the NSF) invested $531 million in the aforementioned LHC project and helped design and build the ATLAS and CMS detectors through two of its labs — Brookhaven in New York and Fermilab in Illinois.

Since U.S. government agencies compete for budget dollars and attention, the natural reaction is for each agency to guard its resources. So in many ways, the opening up of the DOE’s Office of Science was an unlikely path. Perhaps even more unlikely is the individual who led the charge: Dr. Raymond Orbach, the director of the Office of Science. In 2002 Orbach was appointed by the Bush administration — a group not exactly known for its collaborative style of governance, much less its love for open science, or, in some cases, science at all. But Orbach proved to be a true leader in promoting partnerships with other agencies, universities and even industrial organizations.

The INCITE program, in particular, changed the nature of computing at the DOE. Up until 2002, agency computers were reserved primarily for DOE grantees. At that point, Orbach devised the INCITE program, which opened up DOE supercomputing resources to the broader science community. The program was designed so that supercomputing cycles were allocated on a competitive basis, with only the most capable organizations and the most interesting problems given time on the machines. In a nutshell, the idea was to make the best hardware available for the best science. “It sounds completely reasonable now, but I can tell you back in 2002, there was a lot of speculation and complaints that I was opening up our computers to the world,” admits Orbach.

In each succeeding year the program expanded its allocations. In 2008, 265 million CPU hours on DOE machines were awarded to 55 projects: eight from industry, 17 from universities, and 20 from DOE labs, with the remainder going to other public, private and international researchers.

As it turns out, INCITE will also provide the structure for computer allocations announced on Monday for a new partnership with NOAA. In this case, the Office of Science will make available more than 10 million hours of computing time for NOAA to develop and refine advanced climate change models. The work will be performed on the latest computing hardware at three DOE labs: Argonne, Oak Ridge, and NERSC at Lawrence Berkeley.

Although the DOE has worked with the climate community before, it’s mostly been done via lower-level collaborations between PIs across agencies. As Pete Beckman, Argonne’s Interim Director, puts it: “This really says we want to move together in a strategic way. And that’s very important.” He sees the new collaboration as a way to move the national climate and weather modeling work forward under a more unified structure. At Argonne, researchers have already begun porting some of the NOAA codes to run on the lab’s 557-teraflop Blue Gene/P system. Under this new framework, Beckman believes that over the next couple of years we should be able to “dramatically improve our capabilities for weather and climate prediction.”

The collaboration between the DOE and NOAA has been formalized in a memorandum of understanding (MOU), but is being done under the general framework of the Climate Change Science Program (CCSP), which was instituted in July 2003. The program brought together not just NOAA and the DOE, but also NCAR and ten other federal agencies. The rationale was to bring some cohesion to the climate codes being developed across the United States. “At the time the CCSP was promulgated, the United States was behind in high-end computation,” says Orbach. “The Japanese Earth Simulator was the fastest machine in the world and we didn’t have any open science capability to match it.”

In 2004, the U.S. took back the top spot in supercomputing with BlueGene/L and has maintained the lead ever since. But not all U.S. agencies were endowed with leading-edge supercomputers or the software talent they attracted. NOAA, which is administered by the Department of Commerce, has much less HPC capability than federal agencies like the Department of Defense and the DOE. Currently, the most powerful system owned by NOAA is a relatively modest 25-teraflop IBM Power6-based system. “In my view, this MOU is a recognition of where each of the agencies is at this point in time, and frankly a rationalization of their capabilities and talents,” explained Orbach.

In addition to the retaking of supercomputing leadership in 2004, the climate modeling community has also expanded. Through the Atmospheric Radiation Measurement (ARM) program and SciDAC (a program begun in 2001 that brought together top researchers in a variety of scientific disciplines), the Office of Science has developed deep expertise in both climate measurement and global climate change modeling. According to Orbach, the accumulation of this expertise over the last five years is at least as important as the new hardware in moving the climate models forward.

Under the new partnership, the NOAA codes will be opened to the community, and Orbach hopes the software will be optimized with the help of SciDAC researchers. NOAA currently uses one of its home-grown Geophysical Fluid Dynamics Laboratory (GFDL) codes to predict hurricanes, but that code is limited to a grid model with 9 km granularity. Orbach says that to get really accurate models, you need to get down to the 1 km level. By optimizing the software, Orbach thinks you can pick up a couple of orders of magnitude in “effective speed,” and notes that SciDAC has made similar improvements in other codes it has worked on.
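
One rough, back-of-the-envelope way to see why that resolution jump is so expensive (this is an illustrative estimate, not a figure cited by Orbach or NOAA): refining the horizontal grid from 9 km to 1 km multiplies the number of grid cells by about 9², and the shorter time step required for numerical stability adds roughly another factor of 9, so the total work grows by around 9³, close to three orders of magnitude. That is why software gains of a couple of orders of magnitude in “effective speed” matter so much.

```python
# Back-of-the-envelope sketch (illustrative only, not an actual GFDL calculation):
# refining the horizontal grid by a factor r multiplies the cell count by ~r^2
# and, via the CFL stability condition, shrinks the time step by ~r, so the
# total work grows by roughly r^3.
import math

def refinement_cost_factor(coarse_km: float, fine_km: float) -> float:
    """Approximate increase in compute cost for a horizontal grid refinement."""
    r = coarse_km / fine_km      # refinement factor, e.g. 9 for 9 km -> 1 km
    return r**2 * r              # r^2 more cells, r more time steps

factor = refinement_cost_factor(9.0, 1.0)
print(f"~{factor:.0f}x more work (~{math.log10(factor):.1f} orders of magnitude)")
# prints: ~729x more work (~2.9 orders of magnitude)
```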

NOAA’s GFDL climate codes and the DOE-NCAR Community Climate System Model (CCSM) are the two major national climate models developed in the U.S. The CCSM model is already being run extensively on DOE supercomputers, while porting of the GFDL code is imminent. Whether the codes become integrated at some point or continue to diverge remains an open question, although Orbach has his own take on this.

“Ultimately, I would like to see the United States have a single code,” he says. “That’s what the Europeans have agreed on and they have many more partners than we have. As a consequence, they’ve been able to develop a common code for the whole European community and have made really wonderful advancements. This multiplicity of codes — I don’t know how it’s going to shake out.”

Looking further out, Orbach would like to see the climate change models incorporate human factors. Today the climate codes only take into account the physical system — the oceans, the atmosphere, land masses, etc. But human behavior can be modeled as well, and since people will necessarily change their behavior in response to climate policy decisions — for example, energy pricing, new energy sources, and conservation measures — that feedback must be part of the climate model to produce an accurate prediction. More importantly, the policy makers themselves would need access to those models so they could run different scenarios for policies they are considering.
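
To make the idea concrete, here is a purely illustrative toy loop; the functions, coefficients and units below are hypothetical placeholders for illustration, not drawn from GFDL, CCSM or any real integrated model. It simply shows how a behavioral response could be fed back into a physical model at each step.

```python
# Purely illustrative sketch of coupling a physical climate model with a
# human-factors feedback. Every function and number here is a hypothetical
# placeholder, not taken from any real climate or policy code.

def physical_climate_step(temperature_c: float, emissions_gt: float) -> float:
    """Toy physical model: global mean temperature responds to this year's emissions."""
    return temperature_c + 0.002 * emissions_gt

def human_response(temperature_c: float, emissions_gt: float) -> float:
    """Toy behavioral model: warming prompts policy (pricing, conservation) that trims emissions."""
    reduction = 0.01 * max(temperature_c - 14.0, 0.0)
    return emissions_gt * (1.0 - reduction)

temperature, emissions = 14.5, 40.0   # starting global mean temp (C), annual emissions (GtCO2)
for year in range(2008, 2058):
    temperature = physical_climate_step(temperature, emissions)
    emissions = human_response(temperature, emissions)   # feedback into the next step

print(f"Toy projection for 2058: {temperature:.2f} C, {emissions:.1f} GtCO2/yr")
```

In principle, a policy maker could vary the strength of that behavioral response to explore different scenarios, which is the kind of interactive use Orbach envisions.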

Integrated models like this are already being talked about in anticipation of multi-petaflop and eventually exaflop DOE machines. The agency has already donated a million CPU hours to the National Endowment for the Humanities (NEH) to begin generating interest in this type of application. Workshops have also been set up at Berkeley to teach social scientists how to make use of these leading-edge supercomputers.

“We’re not there yet,” Orbach told me. “I don’t know how fast our computers are going to have to be or how good our codes are going to have to be, but you can see where we’re going.”

Orbach probably won’t be around to usher in these next-generation applications though. As a political appointee, his tenure at the DOE ends in four months, when Bush leaves office. He assumes that whoever prevails in the presidential election will want their own person to head the Office of Science. “I disappear at noon on January 20th,” notes Orbach.

Theoretically, his INCITE program could be ditched or scaled back by new leadership, but that’s highly unlikely. The program is already too popular in the science and technology community. What’s more likely is that the next director will build on the foundation Orbach has laid over his seven-year tenure. And with public awareness of climate change and energy policy at an all-time high, the DOE may well be the most important agency of the U.S. government in the next administration.
