DOE Computational Science Graduate Fellowship Celebrates 30 Years of Cultivating Leaders

July 26, 2021

July 26, 2021 — Since the Department of Energy’s Office of Science and its Office of Advanced Scientific Computing Research (ASCR) launched the Computational Science Graduate Fellowship (DOE CSGF) in 1991, the program has supported more than 430 scientists’ training and has built a community of leaders who continue to shape this dynamic field.

The program was created with a specific goal: to develop a workforce that could harness the world’s most powerful supercomputers to solve complex science and engineering problems. The fellowship has evolved with technological and scientific changes over the past three decades and has forged connections and collaborations among researchers. Physicists, materials scientists, chemists, biologists, mathematicians, computer scientists and engineers have applied these tools to address computational challenges across industry, academia and government laboratories.

“The program’s always been able to grow and adapt and continue to produce this cohort of really talented leaders in the community who have this broader view of what computational science is about, who then go out and shape what’s going on,” says Jeffrey Hittinger of Lawrence Livermore National Laboratory, a DOE CSGF recipient from 1996 to 2000 and now one of the program’s principal investigators (PIs).

During the 1980s, computers were changing quickly, becoming bigger, faster and more powerful with emerging capabilities for modeling fluid dynamics and other scientific phenomena. “With the rapidly increasing capabilities of these supercomputers, it was becoming clear to many of us that there was a potential for enormous impact if they could be used as tools for scientific discovery,” says David Brown, a fellowship co-PI and director of the Computational Research Division at Lawrence Berkeley National Laboratory who has worked with the DOE CSGF since its beginning. Because only a few facilities, such as major government laboratories, had these machines, the scientists working with them got much of their training on the job. Only a few universities had access to supercomputers and none offered formal training in the emerging field of computational science.

Duke University’s Amanda Randles, a DOE CSGF recipient from 2010 to 2013, will use the new Aurora exascale HPC system to simulate how blood and tumor cells flow through the circulatory system. Image courtesy of Joseph Insley, Argonne National Laboratory.

Key leaders in this emerging field met in 1990 to discuss the problem and craft a solution, leading to the fellowship’s launch the following year. At the time, other prestigious graduate fellowships primarily financed tuition and provided a stipend, but the DOE CSGF incorporated additional features that remain hallmarks of the program today. For example, all recipients complete at least one practicum experience at a DOE national laboratory. To support broad interdisciplinary training in computational science, each recipient completes a program of study, an individualized course curriculum designed to expand their knowledge and support their research. Fellows also have access to professional development funds that can be used for conference travel or to buy a computer workstation.

James Corones of DOE’s Iowa State University-based Ames Laboratory was a chief architect of DOE CSGF as it exists today and began managing the program in 1993. In 1997, Corones founded the Krell Institute, which oversees the fellowship to this day.

The DOE CSGF held its first fellows’ meeting in Minneapolis in 1993, and the conference has been held yearly since 1999. Now known as the annual program review, this conference lets fellows share their research and connect with practicum hosts, alumni and program sponsors. “Because of the networking that we’ve done through the conferences and the annual review, DOE CSGF fellows and alumni make strong connections with people that they deal with even today,” says Barbara Helland, who is now ASCR’s associate director for science. She worked with Corones to manage the program at both Ames Laboratory and the Krell Institute.

James Hack, a former director of the National Center for Computational Sciences at Oak Ridge National Laboratory, has helped mold the program from the start. He notes that over the past 50 years, “computational capabilities have increased by a factor of 10 billion due to advances in technology and algorithms. I can’t think of any capability that has grown that rapidly.”

An early application. Image courtesy of DOE CSGF.

Computers in the early 1990s were powerful enough to handle a growing range of simulation problems in fluid dynamics. In subsequent years, larger and faster computers could go even further, tackling complex simulations of chemical processes, more complete models of the climate system and even genomics and computational biology. “I do think of the computer as a virtual laboratory,” Hack says. As datasets have exploded in size and machine-learning algorithms and technologies have evolved, artificial intelligence is driving another revolution in how researchers use computational science to manage information and drive discovery.

The fellowship has overcome challenges across its many years. Frederick Howes, the fellowship’s program manager at ASCR in the early 1990s, advocated for its continuation when its funding was threatened. More recently, proposed reorganizations of government-supported graduate research fellowships jeopardized the program. The computational science community, including the hundreds of DOE CSGF fellows and alumni, rallied and successfully advocated for the program’s continuation under DOE.

Meanwhile the DOE CSGF has achieved its key purpose: Graduates are shaping the scientific landscape. Alumna Judith Hill (1999-2003), for example, used her DOE CSGF experience, including a practicum at Sandia National Laboratories, to get her first staff position at that laboratory. She has since served in computational leadership positions at Oak Ridge and now is a computational science project leader at Livermore. “As a computational scientist,” Hill says, “it’s useful for me to have not only the domain expertise, applied math in my case, but also the algorithmic computer science expertise – all the things that CSGF emphasizes in the program of study, in the practicum experience and in the experiences of the fellows.”

As deputy director of the Materials Project, Berkeley Lab’s Anubhav Jain (2006-2011) uses DOE supercomputers to model new materials for a range of energy challenges, including better batteries. “Computational materials scientists usually don’t receive much formal education about large computers, how they work and what limits they face,” he says, which inhibits researchers’ ability to apply high-performance computing (HPC). “The DOE CSGF brought many of these issues to the forefront and helped me develop an appreciation and understanding of computer science issues in large calculations.” In 2015, he received a $2.5 million DOE Early Career Research Program award focused on using high-throughput computations to discover new materials.

This year, the DOE CSGF supports 95 fellows pursuing graduate degrees at 40 different institutions. The incoming class this fall will comprise a record 32 fellows, half of whom are women. Nearly half come from groups that have been underrepresented in science, technology, engineering and mathematics.

ASCR’s Helland refers to computational science as the third leg of science, alongside experiment/observation and theory. “Having people who understand how to take the physical world and explain it in mathematical terms and in applications code – that need is never going away.”

Source: DOE ASCR
