TACC Selects 21 Scientific Codes For New HPC Software Improvement Program

May 3, 2022

May 3, 2022 — The Texas Advanced Computing Center (TACC) announced the set of 21 codes and ‘grand challenge’-class science problems that will receive funding through the “Characteristic Science Applications” program.

The applications, identified by the community of large-scale scientific computing users, reflect the broad range of science domains and computational approaches — from language, to method, to workflow — that researchers will run on future supercomputers.

A full-atom model of the SARS-CoV-2 envelope, which contains key integral proteins, like the renowned spike trimer. Though microscopic, simulations of viruses involve hundreds of millions to billions of atoms, pushing the limits of modern supercomputing capabilities. Credit: Eric Shinn, Tianle Chen, Karan Kapoor, Noah Trebesch, Aaron Chan, Moeen Meigooni & Emad Tajkhorshid, Theoretical and Computational Biophysics Group, NIH Center for Macromolecular Modeling and Bioinformatics, University of Illinois at Urbana-Champaign.

Among the applications are software for large international experiments like the IceCube Neutrino Observatory; widely used codes from the earthquake and astrophysics communities; and custom codes that explore new approaches to machine learning and black hole modeling. (A full list of funded applications is available in the original article.)

The Characteristic Science Applications will be part of the planning and early science program for the Leadership Class Computing Facility (LCCF) at TACC. Funding from the program comes from the National Science Foundation (NSF).

The program received 140 submissions, covering all areas of science and involving 167 institutions in 38 states. These were reviewed by TACC’s HPC experts and an independent NSF panel, and were chosen based on the scientific significance of the problem each aims to solve; whether the code broadens the representation of scientific applications the LCCF will likely serve; and the current ability to solve the problem.

“Extensive engagement with the diverse research community is critical to the design of LCCF,” said Manish Parashar, director of NSF’s Office of Advanced Cyberinfrastructure. “NSF appreciates the overwhelming response from the community to the CSA program. This will ensure that the future facility will have the broadest impact and sustain our nation’s leadership in science and engineering.”

If awarded, the LCCF will deploy a high performance computing system in the 2026 timeframe with 10 times the capability of TACC’s Frontera supercomputer — currently the 13th fastest in the world and the most powerful at any university.

The IceCube Neutrino Observatory is the first detector of its kind, designed to observe the cosmos from deep within the South Pole ice. An international group of scientists responsible for the scientific research makes up the IceCube Collaboration. The Characteristic Science Application program will help IceCube researchers incorporate AI into their data processing and analysis schemes. Credit: Yuya Makino, IceCube/NSF

Part of the ten-fold improvement will come from gains in code performance, which TACC hopes to achieve through close collaborations between academic software development teams and HPC performance experts at TACC. These collaborations will also help TACC build a system uniquely suited to the needs of scientists and engineers.

“The ultimate design goal of the LCCF is to increase the pace of scientific exploration,” said TACC Executive Director Dan Stanzione. “The CSA projects serve as representatives for the set of problems the LCCF will address over its operational life. In that way, they are also a design driver for the facility, guiding the technology and service choices that comprise the LCCF.”

The projects will compute on Frontera, Longhorn (TACC’s large GPU-based system), and numerous testbeds of alternative or experimental hardware that are, or will be, available to the research teams in the coming years.

TACC expects a wealth of lessons learned and practical experience will stem from the program. These will enable the development of targeted training, supporting the goal of getting researchers ready to run effectively on LCCF on the first day of operations.

The 21 teams selected will each be awarded $150,000 for the first year of study and design, with a commitment from the science team to collaborate with the LCCF project to improve the code and prepare it for the candidate architecture. In total, NSF awarded TACC $7 million over two years to support the CSA program.

“Far too often, technologies and systems fail to deliver their full potential because the end users were insufficiently engaged,” said John West, TACC deputy director and co-principal investigator on the award. “Deep engagement on a future system requires significant investment of time and energy on the part of the scientists. We have constructed the CSA program to create both incentives and ‘skin in the game’ for the selected applications teams. Each will receive multi-year funding to make sure their proposed problem is relevant and ready to run when the LCCF becomes a reality.”

Teams making sufficient progress may be renewed for a second year of funding at the same level during final design. Ten to 15 teams will enter the construction phase of the LCCF project and be funded for approximately 30 months as the Characteristic Science Applications are demonstrated on the LCCF’s HPC resources.

One of the research projects selected aims to improve multi-messenger astrophysics efforts at the IceCube Neutrino Observatory.

“IceCube is undergoing an evolution as we incorporate AI into our data processing and analysis schemes,” said Benedikt Riedel, IceCube Global Computing Coordinator. “The CSA program will give us hands-on experience with the newest cyberinfrastructure and allow us to determine the best ways to use that cyberinfrastructure in the future.”

Another CSA project, led by Emad Tajkhorshid, director of the NIH Center for Macromolecular Modeling and Bioinformatics at the University of Illinois at Urbana-Champaign, will drive the development of the simulation software NAMD for very large molecular systems, giving his team the opportunity to improve a code used by thousands of researchers and to accelerate the pace of discovery in his field.

“We’re also very excited to undertake a highly risky project,” he said, “namely modeling and doing full simulations of viruses, which would not be possible without this opportunity.”

For a full list of the awardees, see Aaron Dubrow’s original article on the TACC website.

The “Characteristic Science Applications for the Leadership Class Computing Facility” project is supported by the National Science Foundation award #2139536.


Source: Aaron Dubrow, TACC
