DOE Shares INCITE with Industry

By Steve Conway

August 25, 2006

In mid-2005, the Department of Energy adopted the Council on Competitiveness' recommendation to expand the INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program to include industry, along with government and university research. In this exclusive HPCwire interview, we first talk about the expanded INCITE program with Doug Kothe, director of science for Oak Ridge National Laboratory's National Center for Computational Sciences. We then turn to Jeff Candy, principal scientist in the Energy Group of General Atomics, one of the initial companies participating in the expanded INCITE program. Jeff is a co-investigator for the General Atomics research project being carried out at ORNL under INCITE.

HPCwire: When did ORNL get involved in the INCITE program?

Kothe: The DOE INCITE program will be in its fourth year starting January 2007, and ORNL has been involved since January 2006, when our Cray XT3 and X1E systems became generally available. Involvement in INCITE is part of our responsibility as a DOE Leadership Computing Facility. The program gets us involved with a new set of researchers who are doing very interesting, very challenging things across a broad spectrum of disciplines. In part because of the findings and initiatives of the Council on Competitiveness, program eligibility was expanded to include industrial firms as well as government and academic researchers.

HPCwire: Three of the four 2006 industrial participants in INCITE are doing their work at ORNL. Why is that?

Kothe: General Atomics, Boeing and DreamWorks Animation are engaged in both fundamental and applied computer and computational science, and ORNL is the nation's largest computing resource for big, open science. The DOE Office of Science wants to help bring the country forward so we can be number one in all areas of science, including science as applied in industry. All INCITE proposals fit within the DOE Office of Science mission.

HPCwire: How much time does ORNL reserve for its INCITE partners?

Kothe: That's determined by the Office of Science, with input from facilities like ours as well as peer scientists in each domain. This year, roughly three million hours on our “Jaguar” Cray XT3 system and another 600,000 hours on our “Phoenix” Cray X1E system are being allocated to five INCITE projects. For 2007, 80 percent of the cycles on the Cray leadership-class computers at ORNL will be allocated through the INCITE program.

HPCwire: Who's eligible to apply for time under the INCITE program?

Kothe: It's open to all scientific researchers and research organizations, whether they're from government, academia or industry. Researchers don't need to have current DOE sponsorship. The projects have to be computationally intensive and large scale, with the potential to make high-impact scientific advances through a large allocation of computer time, resources, and data storage. Proposals can cover one to three years.

HPCwire: How does the process work for selecting INCITE winners?

Kothe: A panel of subject-matter experts reviews each proposal. The panel could be assembled for a specific proposal, or for a group of proposals in the same science domain. We perform a technical readiness assessment on the proposals prior to assembly of the panels to ensure that the projects have the potential to effectively utilize the leadership-class resources at ORNL.

HPCwire: What does ORNL itself look for in the INCITE proposals?

Kothe: Many of the same things as the Office of Science. The key thing is that the science has to be ready for a leadership-class system like ours. The researchers have to be ready to use a large fraction of the machine, or the entire machine, at one time. There also has to be strong potential for a major discovery. Scientific merit is the number one consideration, and this is determined through peer review by a panel of experts. Next comes technical readiness; they need to have the simulation tools built to exploit our systems at large scale.

When I say there has to be good potential for breakthrough science, I mean the coming together of their simulation tools and our machine in a manner that leads to a greater chance of a breakthrough. The Office of Science and ORNL need to see the path for achieving new understanding and new results. It could be a “planned discovery,” where you know that applying a certain amount of supercomputing power is all you need to solve the problem (for example, getting to smaller length and/or time scales), or an “unplanned discovery,” whereby a totally unexpected “light bulb moment” occurs. In either case, the DOE wants to leverage these expensive computing resources, so they need to make sure that people who get INCITE awards have great potential and are ready to exploit the machines. In some cases, we might advise people, and help them, to further parallelize their code and try again next year. To accomplish that, they might get resources at centers that are more geared toward capacity computing.

HPCwire: Do INCITE grantees have to re-apply each year?

Kothe: Yes. If they're making good progress, they have a strong chance of being renewed, but they will have to compete with new proposals. If your simulation tool is more of an unknown and you don't have data to draw on from the prior year, that can be a disadvantage, but renewal isn't automatic, either. We ask very specific questions of renewal proposals. We also ask about challenges in using our system so that we can make changes to increase the productivity of the computational scientists.

HPCwire: How does ORNL work with companies under INCITE? What are ORNL's responsibilities, beyond just making cycles available? What are the rules for the INCITE partners?

Kothe: As a DOE Leadership-Class Facility, we try to be vertically integrated. We can't just be a cycle shop. For example, the Scientific Computing Group in our National Center for Computational Sciences is made up of accomplished Ph.D. computational scientists who work closely with the INCITE project teams to get their tools ported and optimized, and to help with data analysis, algorithm improvements, and so on. These Scientific Computing Group members, whom we call “liaisons,” partner scientifically with each project and remain in day-to-day contact with the project personnel. We have another group, the User Assistance and Outreach Group, that helps solve technical problems. They set up accounts and field questions about moving data, debugging, and so on.

We have two other equally important groups. The Technology Integration Group develops the unifying infrastructure that supports our Leadership systems: archival storage, file systems, networks, cyber security, and kernel and system programming. The HPC Operations Group provides around-the-clock operations coverage of the Leadership computing and storage systems, along with systems administration, configuration management, and cyber security. We are also very fortunate to have a Cray Supercomputing Center of Excellence at ORNL, which provides system expertise to facilitate breakthrough science on Cray architectures, including application targeting, porting, optimization, library development, tool development, and training.

HPCwire: From ORNL's standpoint, how is the INCITE program working?

Kothe: It's working really well, although there's always room for improvement. The companies involved in the INCITE program have some of the best simulation tools in the world. Through INCITE, we at ORNL learn more about the basic technologies in these simulation tools, and this knowledge allows us to help others, especially with porting, tuning, algorithms, etc. Of course, we protect the companies' data and advise others in a pre-competitive, non-proprietary way. My point is that the information flow is bi-directional and also goes across projects. For example, General Atomics has very nice simulation codes, and so does DOE. Each party can learn from what the other has. Any time we get big codes, they present a new set of challenges that we learn from. They tax our compilers and our other tools. In the end, this helps make us a more robust, stable facility. We have a broader range of applications and more turnover than we would have without INCITE, and this is good.

HPCwire: Are there mechanisms for getting feedback on the program to DOE from sites like ORNL, and from the INCITE partners?

Kothe: The processes are still evolving, but today we ask for quarterly updates from all the projects. We ask them to share their results, tell us about any problems they've been having, and what they foresee for the next three months in the way of usage. This allows us to make mid-course corrections and to address problems fairly quickly. We also have yearly user meetings and are starting to have more regular phone conferences. At ORNL, we have hundreds rather than thousands of users, and only dozens of projects, so we can work one-on-one with almost everyone.

HPCwire: In case people are interested, what's the timing for the next round of INCITE awards?

Kothe: The call from the DOE Office of Science went out at the end of June [http://hpc.science.doe.gov/]. People have until September 15 to respond. That may sound like a short time, but the proposals aren't onerous.

HPCwire: What else would ORNL like people to know about the INCITE program?

Kothe: Just that if you're doing big computational science today, INCITE gives you a great opportunity to use a huge resource like ours. We believe that supercomputing will enable another major revolution in science. Having access to a resource like our National Center for Computational Sciences is a tremendous advantage for people with powerful ideas. We hope they'll take advantage of what we have to offer at ORNL.

HPCwire: Thanks, Doug. I'm going to turn now to Jeff Candy. Jeff, can you summarize the work General Atomics is doing in the INCITE program?

Candy: It's related to the long-term goal of turning fusion, the process that occurs in the sun and other stars, into an almost limitless, clean source of energy here on Earth. To do this, we have to understand and learn to control plasma, the super-heated gaseous matter that serves as the fuel for fusion reactions. We have a DOE contract to study the behavior of plasma inside a doughnut-shaped chamber called a tokamak that will be at the heart of the first prototype fusion reactor. We use General Atomics' GYRO code, which solves the gyrokinetic-Maxwell equations. The problem's more complex than if it were just a fluid, because plasma is charged and has to be confined within a magnetic field. The particles orbit around the magnetic field in complicated ways. There are big experimental sites in the UK, Japan, Germany and San Diego. General Atomics does simulations based upon data coming from all these experimental sites, especially San Diego.
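[For readers unfamiliar with the formulation Candy mentions, the kind of system GYRO solves can be written schematically. The equation below is a generic, textbook-style sketch of the electrostatic δf gyrokinetic equation, with species indices, geometry, and electromagnetic terms suppressed; it is not GYRO's exact formulation.]

```latex
\frac{\partial h}{\partial t}
  + \left( v_{\parallel}\,\hat{\mathbf{b}} + \mathbf{v}_d \right)\cdot\nabla h
  + \langle \mathbf{v}_E \rangle \cdot \nabla \left( F_0 + h \right)
  = \frac{q F_0}{T}\,\frac{\partial \langle \phi \rangle}{\partial t}
  + C[h]
```

Here h is the nonadiabatic part of the perturbed particle distribution, F_0 the Maxwellian background, v_d the magnetic drift, ⟨v_E⟩ the gyroaveraged E×B drift, and C a collision operator; the system is closed self-consistently by the gyrokinetic Poisson equation (and, in the electromagnetic case, Ampère's law) for the fields, which is what makes it a gyrokinetic-Maxwell system rather than a simple fluid problem.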

HPCwire: To what extent is your work at ORNL related to the multi-national ITER program?

Candy: Our work is becoming progressively more focused on the ITER program. In terms of turbulence studies, General Atomics is the only place doing ITER-specific simulations. We've done extensive ITER modeling using a transport model (GLF23) derived from GYRO data. We have also done fundamental work on the turbulent transport of alpha particles, which are a product of the fusion reaction and can show significant anomalous behavior. In fact, we've submitted a paper addressing this issue.

To really model plasma inside the ITER tokamak, you also need to get into the engineering sphere. Gyrokinetic simulation tells you the turbulent fluxes for given temperature and density profiles, but you also need to apply other physics to come to a steady state. We're trying to plug GYRO into a massive modeling loop which includes the other relevant physics required to evolve the temperature and density. General Atomics has a SciDAC-2 proposal to develop a gyrokinetic feedback scheme relevant to ITER and other reactor-sized plasmas. This is the type of code you need to make real performance predictions for the ITER reactor. It's designed to give very accurate first-principles answers.
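[To make the structure of such a feedback loop concrete, here is a deliberately toy sketch in Python. Nothing below reflects GYRO's or the SciDAC project's actual code: the flux rule, function names, and parameters are all invented for illustration. It evolves a 1-D temperature profile under a fixed heat source, where the outward flux comes from a crude "critical gradient" rule standing in for the expensive gyrokinetic calculation, and iterates toward steady state.]

```python
import numpy as np

def toy_flux(grad_T, crit=1.0):
    """Stand-in for a gyrokinetic flux evaluation: turbulent flux
    switches on only where the temperature gradient exceeds a
    critical value (a crude 'critical gradient' model)."""
    return np.maximum(grad_T - crit, 0.0) ** 1.5

def relax_profile(T, source, dr, n_iter=40000, dt=1e-4):
    """Evolve dT/dt = source - d(flux)/dr toward steady state,
    re-evaluating the flux from the current profile on every step
    (the 'feedback' part of the loop)."""
    for _ in range(n_iter):
        grad_T = -np.gradient(T, dr)   # -dT/dr, positive for a peaked profile
        flux = toy_flux(grad_T)
        T = T + dt * (source - np.gradient(flux, dr))
        T[-1] = 0.0                    # fixed edge temperature
    return T

# Usage: uniform heating on 25 radial points spanning [0, 1].
n = 25
T = relax_profile(np.zeros(n), np.ones(n), dr=1.0 / (n - 1))
```

At steady state the flux must carry away exactly the integrated source, so the profile self-organizes to sit just above the critical gradient. A real coupled scheme replaces `toy_flux` with full gyrokinetic turbulence simulations at each radius, which is why such feedback loops are so computationally demanding and need leadership-class machines.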

HPCwire: How long have you been doing work at ORNL?

Candy: My connection with ORNL started in the summer of 2002, when ORNL's IBM Power4-based “Cheetah” supercomputer came on line. Through a SciDAC contract, General Atomics started doing simulations on the IBM system. When the Cray X1 arrived in 2003, ORNL's Mark Fahey ported GYRO to it and had a lot of success without a lot of suffering even at this early X1 stage. A lot of the credit for that goes to Mark. GYRO's very portable, so ORNL likes to test new machines with it. They ported GYRO to the Cray XT3 very early on, and it revealed some XT3 ramp-up issues. GYRO really stresses architectures.

HPCwire: How much work will you be doing on the ORNL systems?

Candy: We got about 400,000 hours on the X1E on top of our base National Leadership Computing Facility allocation of 440,000, so in total we have almost a million hours at ORNL.

HPCwire: How did you first learn about the INCITE program?

Candy: We were already aware of the Department of Energy's INCITE program. Mark Fahey encouraged us to submit INCITE proposals, and his advice throughout the process really helped.

HPCwire: Why do your work at ORNL?

Candy: We had been using NERSC, mainly. As a point of history, General Atomics people founded the San Diego Supercomputer Center in the mid-1980s, and GYRO was first run there. By 2002, the only NERSC machine useful for what we were doing was “Seaborg.” We weren't getting much throughput at NERSC, so when we heard about the ORNL “Cheetah” machine we tried it and got tremendous throughput there. The availability of the substantially more powerful Cray machines made ORNL even more attractive for our work. ORNL has an extremely receptive, helpful staff that jumps immediately on problems. We still use NERSC, but markedly less. The ORNL experts are great, because our users can go to them with questions. I deal with ORNL computer scientists, not physicists.

HPCwire: Do you use only the Cray X1E, or the Cray XT3 too?

Candy: The X1E is where we have our account, and it's a great system. We've been able to scale GYRO to the full X1E. It also performs well on the XT3, but we don't have an account on that machine. Progress is tremendous now. We're doing a study that for the first time couples electron and ion-scale turbulence. Normally, people do much smaller, electron-scale simulations. The simulation we're running requires almost the entire X1E and takes five to six days to run one iteration. It's a great experience.

HPCwire: How do you feel about the INCITE program?

Candy: INCITE is absolutely crucial for our current research program. It's a great program.

HPCwire: Anything to add, Jeff?

Candy: I just want to repeat that Mark Fahey of ORNL has been a crucial person in this effort, especially for code optimization. He sees things we sometimes don't. I have nothing but great things to say about him.
