DOE Shares INCITE with Industry

By Steve Conway

August 25, 2006

In mid-2005, the Department of Energy adopted the Council on Competitiveness' recommendation to expand the INCITE (Innovative and Novel Computational Impact on Theory and Experiment) program to include industry, along with government and university research. In this exclusive HPCwire interview, we first talk about the expanded INCITE program with Doug Kothe, director of science for Oak Ridge National Laboratory's National Center for Computational Sciences. We then turn to Jeff Candy, principal scientist in the Energy Group of General Atomics, one of the initial companies participating in the expanded INCITE program. Jeff is a co-investigator for the General Atomics research project being carried out at ORNL under INCITE.

HPCwire: When did ORNL get involved in the INCITE program?

Kothe: The DOE INCITE program will be in its fourth year starting January 2007, and ORNL has been involved since January 2006, when our Cray XT3 and X1E systems became generally available. Involvement in INCITE is part of our responsibility as a DOE Leadership Computing Facility. The program gets us involved with a new set of researchers who are doing very interesting, very challenging things across a broad spectrum of disciplines. In part because of the findings and initiatives of the Council on Competitiveness, program eligibility was expanded to include industrial firms as well as government and academic researchers.

HPCwire: Three of the four 2006 industrial participants in INCITE are doing their work at ORNL. Why is that?

Kothe: General Atomics, Boeing and DreamWorks Animation are engaged in both fundamental and applied computer and computational science, and ORNL is the nation's largest computing resource for big, open science. The DOE Office of Science wants to help bring the country forward so we can be number one in all areas of science, including science as applied in industry. All INCITE proposals fit within the DOE Office of Science mission.

HPCwire: How much time does ORNL reserve for its INCITE partners?

Kothe: That's determined by the Office of Science, with input from facilities like ours as well as peer scientists in each domain. This year, roughly three million hours on our “Jaguar” Cray XT3 system and another 600,000 hours on our “Phoenix” Cray X1E system are being allocated to five INCITE projects. For 2007, 80 percent of the cycles on the Cray leadership-class computers at ORNL will be allocated through the INCITE program.

HPCwire: Who's eligible to apply for time under the INCITE program?

Kothe: It's open to all scientific researchers and research organizations, whether they're from government, academia or industry. Researchers don't need to have current DOE sponsorship. The projects have to be computationally intensive and large scale. They have to have the potential to make high-impact scientific advances through the use of a large allocation of computer time, resources, and data storage. Proposals can be for one to three years in length.

HPCwire: How does the process work for selecting INCITE winners?

Kothe: A panel of subject-matter experts reviews each proposal. The panel could be assembled for a specific proposal, or for a group of proposals in the same science domain. We perform a technical readiness assessment on the proposals prior to assembly of the panels to ensure that the projects have the potential to effectively utilize the leadership-class resources at ORNL.

HPCwire: What does ORNL itself look for in the INCITE proposals?

Kothe: Many of the same things as the Office of Science. The key thing is that the science has to be ready for a leadership-class system like ours. The researchers have to be ready to use a large fraction of the machine, or the entire machine, at one time. There also has to be strong potential for a major discovery. Scientific merit is the number one consideration, and this is determined by a panel of expert peer reviewers. Next comes technical readiness: they need to have simulation tools built to exploit our systems at large scale.

When I say there has to be good potential for breakthrough science, I mean the coming together of their simulation tools and our machine in a manner that leads to a greater chance of a breakthrough. The Office of Science and ORNL need to see the path for achieving new understanding and new results. It could be a “planned discovery,” where you know that applying a certain amount of supercomputing power is all you need to solve the problem (for example, getting to smaller length and/or time scales), or an “unplanned discovery,” whereby a “light bulb moment” occurs that was totally unexpected. In either case, the DOE wants to leverage these expensive computing resources, so they need to make sure that people who get INCITE awards have great potential and are ready to exploit the machines. In some cases, we might tell people, and help people, to further parallelize their code and try again next year. To accomplish that, they might get resources at centers that are more geared toward capacity computing.

HPCwire: Do INCITE grantees have to re-apply each year?

Kothe: Yes. If they're making good progress, they have strong chances for repeating, but they will have new proposals to compete with. If your simulation tool is more of an unknown and you don't have data to draw on from the prior year, that can be a disadvantage, but it's not automatic for proposals to be renewed, either. We ask very specific questions of renewal proposals. We also ask about challenges in using our system so that we can make changes to increase the productivity of the computational scientists.

HPCwire: How does ORNL work with companies under INCITE? What are ORNL's responsibilities, beyond just making cycles available? What are the rules for the INCITE partners?

Kothe: As a DOE Leadership-Class Facility, we try to be vertically integrated. We can't just be a cycle shop. For example, the Scientific Computing Group in our National Center for Computational Sciences is made up of accomplished Ph.D. computational scientists who work closely with the INCITE project teams to get their tools ported and optimized, and to help with data analysis, algorithm improvements, and so on. These Scientific Computing Group members, whom we call “liaisons,” partner scientifically with each project and remain in day-to-day contact with the project personnel. We have another group, our User Assistance and Outreach Group, that helps solve technical problems. They set up accounts and field questions about moving data, debugging, and so on.

We have two other equally important groups. The Technology Integration Group develops the unifying infrastructure that supports our Leadership systems, things like archival storage, file systems, networks, cyber security, and kernel and system programming. The HPC Operations Group provides around-the-clock operations coverage of the Leadership computing and storage systems, as well as systems administration, configuration management, and cyber security. We are also very fortunate to have a Cray Supercomputing Center of Excellence at ORNL, which provides system expertise to facilitate breakthrough science on Cray architectures: application targeting, porting, optimization, library development, tool development, and training.

HPCwire: From ORNL's standpoint, how is the INCITE program working?

Kothe: It's working really well, although there's always room for improvement. The companies involved in the INCITE program have some of the best simulation tools in the world. Through INCITE, we at ORNL learn more about the basic technologies in these simulation tools, and this knowledge allows us to help others, especially with porting, tuning, algorithms, and so on. Of course, we protect the companies' data and advise others only in a pre-competitive, non-proprietary way. My point is that the information flow is bi-directional and also goes across projects. For example, General Atomics has very nice simulation codes, and so does DOE. Each party can learn from what the other has. Any time we get big codes, they present a new set of challenges that we learn from. They tax our compilers and our other tools. In the end, this helps make us a more robust, stable facility. We have a broader range of applications and more turnover than we would without INCITE, and this is good.

HPCwire: Are there mechanisms for getting feedback on the program to DOE from sites like ORNL, and from the INCITE partners?

Kothe: The processes are still evolving, but today we ask for quarterly updates from all the projects. We ask them to share their results, tell us about any problems they've been having, and what they foresee for the next three months in the way of usage. This allows us to make mid-course corrections and to address problems fairly quickly. We also have yearly user meetings and are starting to have more regular phone conferences. At ORNL, we have hundreds rather than thousands of users, and only dozens of projects, so we can work one-on-one with almost everyone.

HPCwire: In case people are interested, what's the timing for the next round of INCITE awards?

Kothe: The call from the DOE Office of Science went out at the end of June [http://hpc.science.doe.gov/]. People have until September 15 to respond. That may sound like a short time, but the proposals aren't onerous.

HPCwire: What else would ORNL like people to know about the INCITE program?

Kothe: Just that if you're doing big computational science today, INCITE gives you a great opportunity to use a huge resource like ours. We believe that supercomputing will enable another major revolution in science. Having access to a resource like our National Center for Computational Sciences is a tremendous advantage for people with powerful ideas. We hope they'll take advantage of what we have to offer at ORNL.

HPCwire: Thanks, Doug. I'm going to turn now to Jeff Candy. Jeff, can you summarize the work General Atomics is doing in the INCITE program?

Candy: It's related to the long-term goal of turning fusion, the process that occurs in the sun and other stars, into an almost limitless, clean source of energy here on earth. To do this, we have to understand and learn to control plasma, the super-heated gaseous matter that serves as the fuel for fusion reactions. We have a DOE contract to study the behavior of plasma inside a doughnut-shaped chamber called a tokamak that will be at the heart of the first prototype fusion reactor. We use General Atomics' GYRO code, which solves the gyrokinetic Maxwell equations. The problem's more complex than if it were just a fluid, because plasma is charged and has to be confined within a magnetic field. The particles orbit around the magnetic field in complicated ways. There are big experimental sites in the UK, Japan, Germany and San Diego. General Atomics does simulations based upon data coming from all these experimental sites, especially San Diego.

HPCwire: To what extent is your work at ORNL related to the multi-national ITER program?

Candy: Our work is becoming progressively more focused on the ITER program. In terms of turbulence studies, General Atomics is the only place doing ITER-specific simulations. We've done extensive ITER modeling using a transport model (GLF23) derived from GYRO data. We have also done fundamental work on the turbulent transport of alpha particles, which are a product of the fusion reaction and can show significant anomalous behavior. In fact, we've submitted a paper addressing this issue.

To really model plasma inside the ITER tokamak, you also need to get into the engineering sphere. Gyrokinetic simulation tells you the turbulent fluxes for given temperature and density profiles, but you also need to apply other physics to come to a steady state. We're trying to plug GYRO into a massive modeling loop which includes the other relevant physics required to evolve the temperature and density. General Atomics has a SciDAC-2 proposal to develop a gyrokinetic feedback scheme relevant to ITER and other reactor-sized plasmas. This is the type of code you need to make real performance predictions for the ITER reactor. It's designed to give very accurate first-principles answers.
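The feedback loop Candy describes can be pictured as a simple iteration: a turbulence calculation returns the heat flux for a given temperature profile, and other transport physics evolves the profile until the turbulent flux balances the heating source. The toy sketch below only illustrates that structure; the flux model, function names, and parameters are all hypothetical stand-ins, not GYRO itself (which solves the gyrokinetic-Maxwell equations at vastly larger scale).

```python
import numpy as np

def turbulent_flux(T, r):
    """Toy stand-in for a gyrokinetic flux calculation (not GYRO):
    outward flux grows with the local temperature gradient."""
    dTdr = np.gradient(T, r)
    return -0.5 * dTdr * T  # crude gradient-driven, temperature-dependent scaling

def relax_profile(r, T0, source, steps=3000, dt=1e-4):
    """Evolve T until the divergence of the turbulent flux
    roughly balances the heating source (the feedback loop)."""
    T = T0.copy()
    for _ in range(steps):
        flux = turbulent_flux(T, r)
        residual = -np.gradient(flux, r) + source  # transport-equation residual
        T += dt * residual
        T[-1] = T0[-1]                             # hold edge temperature fixed
    return T

r = np.linspace(0.05, 1.0, 64)        # normalized minor radius
T0 = np.ones_like(r)                  # flat initial temperature profile
source = np.exp(-(r / 0.3) ** 2)      # centrally peaked heating
T = relax_profile(r, T0, source)      # core heats up relative to the edge
```

In the real SciDAC-style scheme, the inner flux call would be a full gyrokinetic simulation rather than a one-line formula, which is why such loops demand leadership-class allocations.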

HPCwire: How long have you been doing work at ORNL?

Candy: My connection with ORNL started in the summer of 2002, when ORNL's IBM Power4-based “Cheetah” supercomputer came on line. Through a SciDAC contract, General Atomics started doing simulations on the IBM system. When the Cray X1 arrived in 2003, ORNL's Mark Fahey ported GYRO to it and had a lot of success without a lot of suffering even at this early X1 stage. A lot of the credit for that goes to Mark. GYRO's very portable, so ORNL likes to test new machines with it. They ported GYRO to the Cray XT3 very early on, and it revealed some XT3 ramp-up issues. GYRO really stresses architectures.

HPCwire: How much work will you be doing on the ORNL systems?

Candy: We got about 400,000 hours on the X1E on top of our base National Leadership Computing Facility allocation of 440,000, so in total we have almost a million hours at ORNL.

HPCwire: How did you first learn about the INCITE program?

Candy: We were already aware of the Department of Energy's INCITE program. With Mark Fahey's encouragement we submitted INCITE proposals, and his advice really helped.

HPCwire: Why do your work at ORNL?

Candy: We had been using NERSC, mainly. As a point of history, General Atomics people founded the San Diego Supercomputer Center in the mid 1980s, and GYRO was first run there. By 2002, the only NERSC machine useful for what we were doing was “Seaborg.” We weren't getting much throughput at NERSC and heard about the ORNL “Cheetah” machine, and we got tremendous throughput from that. The availability of the substantially more powerful Cray machines made ORNL even more attractive for our work. ORNL has an extremely receptive, helpful staff that jumps immediately on problems. We still use NERSC, but markedly less. The ORNL experts are great, because our users can go to them with questions. I deal with ORNL computer scientists, not physicists.

HPCwire: Do you use only the Cray X1E, or the Cray XT3 too?

Candy: The X1E is where we have our account, and it's a great system. We've been able to scale GYRO to the full X1E. It also performs well on the XT3, but we don't have an account on that machine. Progress is tremendous now. We're doing a study that for the first time couples electron and ion-scale turbulence. Normally, people do much smaller, electron-scale simulations. The simulation we're running requires almost the entire X1E and takes five to six days to run one iteration. It's a great experience.

HPCwire: How do you feel about the INCITE program?

Candy: INCITE is absolutely crucial for our current research program. It's a great program.

HPCwire: Anything to add, Jeff?

Candy: I just want to repeat that Mark Fahey of ORNL has been a crucial person in this effort, especially for code optimization. He sees things we sometimes don't. I have nothing but great things to say about him.
