DOE Awards 18 Million Hours of Supercomputing Time

By Nicole Hemsoth

February 3, 2006

Secretary of Energy Samuel W. Bodman has announced that DOE's Office of Science has awarded a total of 18.2 million hours of computing time on some of the world's most powerful supercomputers to help researchers in government labs, universities, and industry working on projects ranging from designing more efficient engines to better understanding Parkinson's disease.
 
The allocations of computing time are made under DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, now in its third year of providing resources to computationally intensive research projects in the national interest. In its first two years, INCITE has enabled scientists to create unprecedented simulations and gain greater insight into problems in chemistry, combustion, astrophysics, genetics and turbulence.
 
“Through the INCITE program, the department's scientific computing resources will continue to allow researchers to make discoveries that might otherwise not be possible,” Energy Secretary Bodman said in announcing the latest INCITE grants. “We live in an exciting time as researchers make advances that potentially can help us all.”
 
Projects to be supported by INCITE in the coming year include:
 
  * the design of more efficient aircraft and engines

  * learning more about the molecular basis of Parkinson's disease

  * simulations that will help advance fusion as a future energy source

  * improved understanding of human and ecological processes affecting climate change

  * simulations to learn how cell disruptions allow diseases and infections to occur

  * development of stronger advanced materials and better understanding of material properties

  * improved simulations of molecular collisions that can be used to study a wide range of scientific problems

  * development of computing tools to improve computer visualizations and animations

  * improved understanding of water and of how light affects it in biological systems

  * computing the structure of proteins at the atomic level

  * an increased understanding of the dark energy and dark matter thought to make up more than nine-tenths of our universe

  * simulations of particle accelerators used in scientific research

For the first time in the three-year history of INCITE, proposals from private-sector researchers were specifically encouraged. In return, much of the resulting knowledge will be made publicly available. The program was also expanded from a single supercomputing facility at Lawrence Berkeley National Laboratory to five supercomputers at four DOE national laboratories: Argonne National Laboratory, Lawrence Berkeley National Laboratory, Oak Ridge National Laboratory and Pacific Northwest National Laboratory. This allowed DOE to increase the number of grants to 15, up from three in each of the past two years.
 
Four of the proposals receiving awards were from industry: Boeing Co., DreamWorks Animation, General Atomics Co. and Pratt & Whitney. Academic and research institutions and companies to receive computing time are: Auburn University; California Institute of Technology; Fisk University; Harvard University; Howard Hughes Medical Institute; Rollins College; Tech-X Corp.; University of Alaska, Fairbanks; University of California, Berkeley; University of California, Davis; University of California, San Diego; University of Colorado; University of Strathclyde; and the University of Washington.
 
Researchers at DOE's Lawrence Berkeley, Lawrence Livermore, Los Alamos and Oak Ridge National Laboratories will also receive computing time. 
 
In response to the May 2005 call for INCITE proposals, 43 computationally intensive, large-scale research projects were submitted, requesting over 95 million processor-hours. The proposals covered 11 scientific disciplines: accelerator physics, astrophysics, chemical sciences, climate research, computer science, engineering physics, environmental science, fusion energy, life sciences, materials science and nuclear physics.
 
Sixty percent of the proposals received were from U.S. universities, and 41 percent were supported by research agencies other than the Department of Energy.
 
In the first year of INCITE at NERSC, scientists from the University of Chicago and Argonne National Lab studying supernovae produced the first full-star, three-dimensional simulations of stellar explosions. Another group from UC Berkeley and Lawrence Berkeley National Lab used their INCITE allocation to study key aspects of photosynthesis to better understand this sustainable energy source. A third group, from Georgia Tech, created turbulence simulations of unsurpassed detail, which can be used to improve engineering processes.
 
Currently, three research groups are making significant use of their allocations. One University of Chicago group is seeking to increase our understanding of accretion in the cosmos through simulation and experiment, modeling a Princeton Plasma Physics Laboratory experiment designed to probe the magneto-rotational instability. Another group, from Sandia National Laboratories in Livermore, is creating direct numerical simulations of turbulent non-premixed flames that will serve as a benchmark for future theory and experiment. The third group, from the University of Washington, is using the IBM supercomputer at NERSC to catalog the dynamical shapes of proteins by systematically unfolding them.
 
“I believe that the overwhelming response to the INCITE program reflects both the computational leadership of the Department of Energy and the widespread recognition of computational science as a tool for scientific discovery,” said Dr. Raymond L. Orbach, Director of DOE's Office of Science. “Fortunately, the Office of Science has facilities and expertise to help meet this demand.”
 
Processor-hours refer to how time is allocated on a supercomputer. A project receiving 50,000 processor-hours could run on 50 processors for 1,000 hours, or about 42 days. Running the same project on a single-processor desktop computer would take almost six years. Projects to be supported by INCITE in 2006 range from 16,000 hours for a pilot study of Parkinson's disease to 5 million hours to study protein folding. Six of the projects received awards of 1 million or more processor-hours.
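As a back-of-the-envelope check of that arithmetic, here is a minimal Python sketch using only the article's own numbers; the function name and the assumption of perfect parallel scaling are illustrative, not from the article:

    # Minimal sketch of the processor-hour arithmetic described above.
    # A processor-hour is one processor running for one hour, so an
    # allocation can be spent on many processors in parallel or on
    # fewer processors over a longer wall-clock time.

    HOURS_PER_DAY = 24
    HOURS_PER_YEAR = 24 * 365

    def wall_clock_hours(processor_hours: float, processors: int) -> float:
        """Wall-clock hours needed to consume an allocation on a given
        number of processors (assumes perfect parallel efficiency)."""
        return processor_hours / processors

    # The article's example: a 50,000 processor-hour award on 50 processors.
    hours = wall_clock_hours(50_000, processors=50)
    print(f"{hours:,.0f} hours, about {hours / HOURS_PER_DAY:.0f} days")
    # -> 1,000 hours, about 42 days

    # The same award run serially on a single-processor desktop machine.
    serial = wall_clock_hours(50_000, processors=1)
    print(f"about {serial / HOURS_PER_YEAR:.1f} years on one processor")
    # -> about 5.7 years, i.e., almost six years on one processor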
 
“Based on the scientific community's response to INCITE, along with the availability of additional supercomputers, we are now able to take this groundbreaking computational science program to a new level,” said Energy Secretary Bodman. “Previous INCITE projects have addressed problems ranging in size from the photosynthesis chemistry in plant molecules to supernova explosions, from turbulent flows to the formation of stars, from protein folding to combustion studies. What they all have in common is that access to DOE's scientific computing resources has allowed researchers to make advances that would have otherwise taken much longer or not been possible.”

DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the nation and ensures U.S. world leadership across a broad range of scientific disciplines.  For more information about the Office of Science or for descriptions of the INCITE projects, go to www.science.doe.gov.
 
