Much Ado About Petascale

By Michael Feldman

August 10, 2007

On Wednesday, the National Science Foundation (NSF) announced the award recipients for two highly coveted petascale supercomputers. The NSF selected the University of Illinois at Urbana-Champaign (UIUC) for the “Track 1” grant, while the University of Tennessee was selected for “Track 2.” The Track 1 grant funds a multi-petaflop supercomputer; Track 2 covers a smaller system expected to come in just shy of a petaflop. The National Science Board met on Monday to approve the funding for the two supercomputers.

Specific information about the machines was not revealed and will not be forthcoming until the award process is completed — probably sometime in the fall.

UIUC is slated to receive $208 million over four and a half years to acquire and deploy the multi-petaflop machine, code named “Blue Waters.” It will be operated by the National Center for Supercomputing Applications (NCSA) and its academic and industry partners in the Great Lakes Consortium for Petascale Computation. The system is expected to go online in 2011.

The sub-petaflop system will be installed at the University of Tennessee at Knoxville’s Joint Institute for Computational Sciences. The $65 million, five-year project will include partners at Oak Ridge National Laboratory (ORNL), the Texas Advanced Computing Center (TACC), and the National Center for Atmospheric Research (NCAR).

Here’s where it gets interesting. Most of the information stated above was already known last week when an NSF staffer accidentally posted the names of the winning proposals on an NSF website. Before the information could be removed, the supercomputing community had gotten wind of the decisions. And, as you might imagine, a lot of people on the losing end of the awards are already questioning the selections.

One could pass this off as sour grapes by the losers, but I have a sense something else is going on here. According to my sources, people have been concerned about the NSF petascale awards process almost from the start. A New York Times piece on the NSF grants earlier in the week reported that several government supercomputing scientists were concerned the decision might raise questions about impartiality and political influence, and it quoted Lawrence Berkeley National Laboratory’s Horst Simon as saying:

“The process needs to be above all suspicion. It’s in the interest of the national community that there is not even a cloud of suspicion, and there already is one.”

Although nobody was willing to go on the record with me, I learned some interesting tidbits from a few individuals who were close to the proposals. Since there is no way to confirm any of this, take all of the following with a grain of salt.

To begin with, the Track 1 supercomputer bid by UIUC appears to be an IBM PERCS system — the same system being developed for DARPA’s High Productivity Computing Systems (HPCS) program. The Track 2 supercomputer bid by the University of Tennessee appears to be a Baker-class Cray machine, essentially a precursor to the company’s HPCS Cascade architecture. I’ll get to why this may be significant in just a moment.

Putting aside the Track 2 award, let’s look at the Track 1 proposals. According to my sources, there were four bids:

1. Carnegie Mellon University/Pittsburgh Supercomputing Center (plus partners?):  This group bid a system based on Intel’s future terascale processors. Intel has demonstrated an 80-core processor prototype that has achieved a teraflop. I’m not sure of the peak performance for the proposed system; it may be as high as 40 petaflops.

2. University of California, San Diego/San Diego Supercomputing Center along with Lawrence Berkeley National Laboratory and others:  The “California” bid was a million-core IBM Blue Gene/Q system, reputed to be in the 20-petaflop range. The host site is rumored to be Lawrence Livermore National Laboratory.

3. University of Tennessee/ORNL (plus others?):  This group proposed a 20-petaflop Cray machine. If true, we can assume it was a Cascade machine (Marble- or Granite-class).

4. University of Illinois at Urbana-Champaign/NCSA along with the Great Lakes Consortium for Petascale Computation:  They proposed and won with an IBM PERCS. It’s thought to be a 10-petaflop system.

As it turned out, at 10 petaflops the winning bid was the least powerful machine in the bunch, peak performance-wise. Even at that, if the system goes live in 2011 as planned, it may very well be the most powerful supercomputer in the world. Keep in mind, though, that the Japanese are also planning to launch a 10-petaflop machine in the same timeframe.
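
For what it’s worth, here’s a quick back-of-envelope tally — a minimal Python sketch, with every number taken from the unconfirmed rumors above rather than any official spec:

    # Rumored peak figures for the four Track 1 bids (all unconfirmed).
    PFLOPS = 1e15  # one petaflop, in flops

    bids = {
        "CMU/PSC (Intel terascale)": 40,
        "UCSD/SDSC + LBNL (IBM Blue Gene/Q)": 20,
        "UT/ORNL (Cray Cascade)": 20,
        "UIUC/NCSA (IBM PERCS, winner)": 10,
    }

    # Rank the bids by rumored peak performance.
    for name, peak_pf in sorted(bids.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {peak_pf} petaflops peak")

    # Arithmetic implied by the rumors: a million-core Blue Gene/Q at
    # 20 petaflops works out to roughly 20 gigaflops per core...
    print(f"Blue Gene/Q: ~{20 * PFLOPS / 1e6 / 1e9:.0f} gigaflops per core")

    # ...and a 40-petaflop machine built from ~1-teraflop terascale
    # chips would need on the order of 40,000 processors.
    print(f"Intel terascale bid: ~{40 * PFLOPS / 1e12:,.0f} chips")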

There may be a number of reasons why the NSF made the selection in favor of PERCS, and I sure would be interested to know what they are. The system is almost certainly not the best in the group in terms of performance-per-watt. I would guess both the Blue Gene/Q and the Intel Wonder machine would be more energy-efficient. Since we don’t know enough about software support for any of these multi-petaflop systems, it’s difficult to compare them on their ability to field big science applications.

One other unusual aspect to the Track 1 selection is that, as HPC centers go, UIUC/NCSA doesn’t have an established reputation for cutting-edge supers. It’s been content to do its work with a number of smaller HPC systems. The PERCS machine is supposed to be housed at UIUC, but no facility yet exists that can accommodate it. We have to assume that all this is going to change.

In defense of the selection, NCSA is one of the five big regional supercomputing centers in the United States and could conceivably grow into this role. The PERCS machine is a pretty safe bet, technology-wise, since DARPA HPCS is helping to fund this effort and investing in IBM is usually a conservative strategy. Certainly, IBM is enthusiastic about the PERCS architecture and especially the POWER7 processor that it is to be based on.

Perhaps the most unfortunate aspect to this process is that a lot of questions will remain unanswered. This is a result of the rather opaque nature of the NSF review process. To be sure, the review criteria are spelled out in Section VI of the NSF Track 1 solicitation, but the actual process is not. Who are the reviewers and how did they qualitatively balance the different criteria? One assumes that the reviewers composed responses to each proposal, but only the awarded proposals go into the public record, and I’m not sure if the feedback from the NSF will be included.

There has been some talk that there were too few qualified proposal reviewers. The argument was that because most of the HPC brain trust had a vested interest in one of the four proposals, there were no qualified reviewers without conflict-of-interest baggage. I’m not sure I buy that; the HPC community seems too large and spread out for that to be a real constraint. Nonetheless, it remains a sore point for some in the community.

There is also speculation that the review group was influenced by one or more individuals who were (or are) involved in the HPCS program. If true, this could have unfairly steered the selection toward the HPCS systems from Cray and IBM, instead of more speculative architectures. There’s no way to tell if this occurred, but the results suggest this is a possibility.

I suppose it could be argued that what’s good enough for DARPA is good enough for the NSF. But keep in mind that the HPCS mission is to create productive and commercially viable supercomputing systems for a range of government and industrial applications; the NSF petascale goal is to find big systems to do big science. Obviously, there’s some overlap here, but it’s reasonable to imagine that these two missions could lead to different computing platforms.

For its part, the NSF sticks by its reviewers and its selection process. Leslie Fink, representing NSF’s Office of Legislative and Public Affairs, sent me the following response to my inquiry about the review process:

“Identities of the reviewers are … confidential,” said Fink. “NSF has a strict conflict of interest policy, and heroic efforts are made to ensure panel members are not in conflict with the proposers. Basically, what happens in review stays in review.”

I guess the big frustration here is that because of the lack of transparency, much of the story will remain hidden. Short of a Congressional inquiry, the NSF isn’t obligated to provide the rationale for awarding these grants, and the losing bids will never be made public. It’s possible that the reviewers did manage to find the best way to spend the taxpayers’ money. I hope so. But since the process takes place behind closed doors, we’ll never know.

——

As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at [email protected].
