PSC: ByteBoost Workshop Accelerates HPC Skills and Advances Computational Research

September 30, 2024

Students from across the U.S. gathered at the Pittsburgh Supercomputing Center (PSC) in August to present their artificial intelligence (AI) research projects as part of the 2024 ByteBoost Summer Workshop. The projects tackled topics as diverse as engineering and discovering new drugs, understanding congressional policy outcomes, and safely controlling traffic for small, individually owned aircraft.

ByteBoost students and PSC staff listen as Derek Simmel, Senior Information Security Officer, gives a tour of the PSC Machine Room.

The ByteBoost program brought 24 students to PSC to deepen their knowledge of, and gain hands-on experience with, innovative technologies such as Neocortex at PSC, ACES at Texas A&M University, and Ookami at Stony Brook University, all cyberinfrastructure testbeds in the NSF’s ACCESS program. Students attended deep technical presentations, ran hands-on exercises, consulted with experts, proposed research projects, and toured the PSC machine room.

Congressional Policy Analysis using ML and HPC

Team CivixLAB: Gogoal Falia, Texas A&M University; Disha Ghoshal, Stony Brook University; Kundan Kumar, Iowa State University; Aakashdeep Sil, Texas A&M University (Mentors: Dana O’Connor and Paola Buitrago, PSC)

This team explored how policy making in the U.S. Congress evolved between 1973 and 2024, and how those changes reflected shifts in social challenges in the country. They chose Neocortex to pursue this project because of its capabilities with Big Data machine learning.

A major challenge for the project was the unconventional nature of the initial data. The team had to classify and then summarize public policy documents, including tax, finance, health, and climate legislation, so that the bills could be expressed as data a machine learning algorithm could recognize.

The CivixLAB team successfully summarized the documents and generated data files that enabled them to begin pre-training their AI — the massive initial step of AI learning, which depends on ingesting large amounts of data to understand them in a broad sense. Next, the team would like to complete the pre-training step and then fine-tune their AI to produce insights on how legislation and social developments interact with each other.
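The classify-then-summarize pipeline can be pictured with a minimal sketch. The topic keywords, the keyword-counting classifier, and the record format below are illustrative assumptions, not the team's actual machine learning method:

```python
# Sketch of the bill-processing pipeline: classify each bill by topic,
# then emit a compact record suitable for pre-training. Topic keywords
# and the record layout are hypothetical stand-ins.

TOPIC_KEYWORDS = {
    "tax": ["tax", "revenue", "deduction"],
    "health": ["health", "medicare", "hospital"],
    "climate": ["climate", "emission", "energy"],
}

def classify_bill(text: str) -> str:
    """Assign the topic whose keywords appear most often in the text."""
    words = text.lower().split()
    counts = {topic: sum(words.count(k) for k in kws)
              for topic, kws in TOPIC_KEYWORDS.items()}
    return max(counts, key=counts.get)

def to_pretraining_record(year: int, text: str) -> dict:
    """Package one bill as a labeled training record."""
    return {"year": year, "topic": classify_bill(text),
            "summary": " ".join(text.split()[:30])}  # crude truncation stand-in

record = to_pretraining_record(
    1998, "A bill to amend the tax code and adjust revenue deduction rules")
```

A real pipeline would replace the keyword counts with a trained classifier and the truncation with an abstractive summarizer, but the data flow is the same.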

Bringing DeepPath to IPUs

Andrew Pang, Georgia Tech (Mentor: Zhenhua He, Texas A&M University)

This project used Texas A&M’s ACES system to explore transitional states in protein activity. Proteins control chemical reactions important to life processes largely by making transitional states between reagents and the desired end products easier to reach. If scientists can better understand how proteins change their shapes, and those of their reagents, to favor desired transitional states, they can engineer the proteins to perform more useful tasks, both medically and scientifically.

Pang deployed the DeepPath AI method for identifying transition states on ACES because of the system’s unique intelligence processing units (IPUs). Designed by the Graphcore company to handle AI learning tasks more efficiently than the current state-of-the-art graphics processing units (GPUs), these processors feature prominently in ACES, which itself was designed as a composable supercomputer that routes computations to the most efficient type of processor for each step of a computation.

The project successfully deployed the first step of DeepPath, called Energy Critic, on ACES. This step of the AI calculates the energy required for the protein to reach a given state, with lower-energy states being easier to reach. Future work would involve running the other steps of DeepPath — Explorer, which explores the likelihood of different states using Energy Critic, and Structure Builder, which predicts the larger-scale protein movements necessary to reach these states — and deploying DeepPath on other leading-edge AI computers, particularly PSC’s Neocortex system.
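The idea behind the Energy Critic step — score each candidate state with a predicted energy, then rank states so the easiest to reach come first — can be sketched with a toy network. The network shape and random weights below are illustrative assumptions, not DeepPath's actual model:

```python
import math
import random

# Toy stand-in for an "Energy Critic": a tiny network maps a
# protein-state feature vector to a scalar energy, so candidate states
# can be ranked (lower energy = easier to reach).

rng = random.Random(0)
W1 = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(8)]   # 4 -> 8
W2 = [rng.gauss(0, 1) for _ in range(8)]                       # 8 -> 1

def energy(state):
    """Predict a scalar energy for one 4-dimensional state vector."""
    hidden = [math.tanh(sum(w * s for w, s in zip(row, state)))
              for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

def rank_states(states):
    """Indices of states from lowest (easiest) to highest energy."""
    return sorted(range(len(states)), key=lambda i: energy(states[i]))

states = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(3)]
order = rank_states(states)
```

In DeepPath itself, this ranking is what the Explorer step would consume when searching for plausible transition paths.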

The ByteBoost bus on the way to tour the PSC Machine Room.

ML-Based Drug Discovery against Dengue Serotype 1 Virus Evaluated Using MD Simulations

Team ViroML: Aadhil Haq, Texas A&M University; Bernard Moussad, Virginia Tech; Eneye Ajayi, University of Southern California; Kamrun Nahar Keya, Iowa State University; Oriana Silva, University of North Texas (Mentor: Wes Brashear, Texas A&M University)

The ViroML team used ACES to explore possible new drugs to combat Dengue fever, a sometimes-deadly, mosquito-transmitted infection that affects 100 to 400 million people every year. Over 40 percent of the world’s population is at risk from Dengue.

The team’s strategy was to use AI to analyze known peptide molecules that stick to the Dengue virus’s DENV envelope protein, which the virus uses to attach to human cells. Using that information, they could generate yet-undiscovered peptides that are more effective at interfering with virus attachment and infection.

Running AI software on ACES that handled both the structure of the envelope protein and the generation of peptides to bind with it, the students successfully generated peptides and simulated docking between the peptides and the protein. Some of their generated candidates were competitive with known blockers of one of the virus’s serotypes. Future work would include refining and optimizing their simulations to identify even better virus blockers, finding blockers for other serotypes, and using AI to automate discovery of larger numbers of viral blockers.
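The generate-then-screen loop described above can be sketched in miniature. The scoring function here is a toy hydrophobicity fraction standing in for a docking score; it is not the team's AI model or a real affinity calculation:

```python
import random

# Sketch of generate-and-screen peptide discovery: propose candidate
# sequences, score each against the target, keep the best. The score
# is a hypothetical stand-in, not a real docking affinity.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
HYDROPHOBIC = set("AVILMFWC")

def generate_peptide(length: int, rng: random.Random) -> str:
    """Propose one random peptide sequence of the given length."""
    return "".join(rng.choice(AMINO_ACIDS) for _ in range(length))

def toy_binding_score(peptide: str) -> float:
    """Fraction of hydrophobic residues: a placeholder affinity proxy."""
    return sum(r in HYDROPHOBic if False else (r in HYDROPHOBIC) for r in peptide) / len(peptide)

rng = random.Random(42)
candidates = [generate_peptide(9, rng) for _ in range(50)]
best = max(candidates, key=toy_binding_score)
```

In the real workflow, a generative model proposes the candidates and molecular dynamics simulations of docking provide the screen.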

Optimizing Vertiport Networks with Mobile Location Data and Agent-Based Modeling

Team SkySync: Behnam Tahmasbi, University of Maryland, College Park; Caden Empey, University of Pittsburgh; Darshan Sarojini, University of California, San Diego; Farnoosh Roozkhosh, University of Georgia; Matthew Moreno, University of Michigan; Wenyu Wang, Ohio State University (Mentors: Eva Siegmann, Stony Brook University, and Zhenhua He, Texas A&M University)

The combination of AI and rotor-craft technologies developed for aerial drones has brought the prospect of commonplace ownership of small aircraft closer to reality. The SkySync team analyzed how AI working on real-time data could control networks of many small aircraft so that a crowded sky can remain safe and efficient. They ran their agent-based simulations on ACES and Stony Brook University’s Ookami system.

Ookami features ARM processors — originally developed for smartphones and tablets — in a novel architecture designed to improve AI learning over GPU technology. Like ACES, its design is particularly friendly to installing and running the Conda environments the team members chose to employ. Initial results indicated that their model, which features AI-guided simulated aircraft, could calculate safe routes that brought each aircraft from origin to desired destination, with possible reductions in transportation costs and time.
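An agent-based model of this kind can be illustrated with a minimal sketch: each simulated aircraft steps one grid cell toward its destination, waiting a tick if the next cell is already claimed. The grid, movement rule, and conflict handling are assumptions for illustration, not the team's simulation:

```python
# Minimal agent-based routing sketch: aircraft move on a grid toward
# their destinations, yielding for one tick when a cell is contested.

def step_toward(pos, dest):
    """One axis-aligned step from pos toward dest."""
    x, y = pos
    if x != dest[0]:
        return (x + (1 if dest[0] > x else -1), y)
    if y != dest[1]:
        return (x, y + (1 if dest[1] > y else -1))
    return pos  # already arrived

def simulate(agents, max_ticks=50):
    """agents: list of (position, destination) pairs. Returns ticks used."""
    positions = [a[0] for a in agents]
    dests = [a[1] for a in agents]
    for tick in range(max_ticks):
        if positions == dests:
            return tick
        claimed = set()
        for i, (pos, dest) in enumerate(zip(positions, dests)):
            nxt = step_toward(pos, dest)
            if nxt in claimed:      # conflict: wait one tick instead
                nxt = pos
            claimed.add(nxt)
            positions[i] = nxt
    return max_ticks

ticks = simulate([((0, 0), (2, 2)), ((2, 0), (0, 0))])
```

Scaling this idea up means richer agents (speed, altitude, battery), real demand data in place of fixed origins and destinations, and optimization of where the vertiports sit.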

Future work would include expanding the scale of their simulations to more densely populated areas, including airborne taxis. The team members would also like to enhance their techniques to make the vertiport networks more efficient and make the agents more complex and realistic, so their simulations are more accurate.

Sparse Matrix Operations

Team Cheetah: Omid Asudeh, University of Utah (Mentor: Eva Siegmann, Stony Brook University)

The Cheetah team explored sparse matrix operations in AI learning. Sparse matrices are data sets that have a lot of zero values. Many real-world applications for AI, such as large language models that underlie AI tools like ChatGPT, have this property. This is a challenge, because using computer storage and memory to handle a lot of zeros carries a huge computational cost. These problems benefit from compressing the data to store only the non-zero values, which promises huge improvements in speed and cost of AI operations. But such compression requires sophisticated computation.
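The compression idea can be made concrete with the standard compressed sparse row (CSR) layout, sketched below in plain Python: only the non-zero values are stored, alongside indices that locate them, and a matrix-vector product touches only those non-zeros. This illustrates the storage format in general, not TACO's internal implementation:

```python
# Compressed sparse row (CSR) storage: keep only non-zero values plus
# the indices needed to locate them, and multiply touching only those.

def to_csr(dense):
    """Convert a dense row-major matrix to (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))   # where each row's values end
    return values, col_idx, row_ptr

def csr_matvec(csr, x):
    """Multiply a CSR matrix by vector x, skipping every zero entry."""
    values, col_idx, row_ptr = csr
    y = []
    for r in range(len(row_ptr) - 1):
        y.append(sum(values[k] * x[col_idx[k]]
                     for k in range(row_ptr[r], row_ptr[r + 1])))
    return y

A = [[0, 2, 0],
     [1, 0, 0],
     [0, 0, 3]]
csr = to_csr(A)                 # stores 3 values instead of 9 entries
y = csr_matvec(csr, [1, 1, 1])  # same result as the dense product
```

Sparse matrix-matrix multiplication (SpMM), the operation TACO handles, generalizes this loop to a second matrix, which is where the scheduling sophistication comes in.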

Asudeh successfully deployed the TACO SpMM sparse matrix software on Ookami. He was able to explore the tool’s ability to handle compressed sparse matrix data effectively as he varied a number of characteristics of the data.

Future work on the project would explore running other sparse matrix software on Ookami. Asudeh would also like to run the work on Ookami’s CPU and GPU nodes, to measure how much the advanced ARM nodes are accelerating the computations.

PSC’s Paola Buitrago, Director of Artificial Intelligence & Big Data, and Derek Simmel, Senior Information Security Officer, present at ByteBoost.

Performance Evaluation of Intelligent System for the Detection of Wild Animals in Nocturnal Period for Automobile Application in Multiple Systems

Yuvaraj Munian, Texas A&M University (Mentor: Dana O’Connor, PSC)

Munian used Neocortex to tackle the tricky task of detecting wildlife as it approaches an oncoming vehicle from the side, with the goal of preventing collisions. In the U.S., about 200 people are killed and 26,000 injured each year in collisions with wildlife, which also cause more than $8 billion in property damage.

Neocortex was Munian’s system of choice for the work, as the computer’s architecture lent itself to rapid training of the convolutional neural network (CNN) needed to quickly classify and recognize animals at the side of the road. He trained the CNN on a visual data set in which animals had been labeled. He is now ready to test it against unlabeled data.
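At the heart of any CNN is the convolution operation: a small filter slides over the image and responds strongly where its pattern appears. The toy vertical-edge filter and image below are for illustration only, not Munian's trained network:

```python
# Plain 2D convolution, the building block of a CNN image classifier.
# A 2x2 filter slides over the image; the output is large in magnitude
# wherever the filter's pattern (here, a vertical edge) is present.

def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

edge_filter = [[1, -1],
               [1, -1]]          # responds to vertical brightness edges
image = [[0, 0, 9, 9],           # dark on the left, bright on the right
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
response = conv2d(image, edge_filter)
```

A full detector stacks many learned filters with nonlinearities and pooling; the specialized dataflow of systems like Neocortex accelerates exactly these repeated sliding-window multiplications.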

In addition to completing testing on Neocortex, Munian would like to repeat the work on ACES, to determine whether either of the two novel AI architectures provides faster learning.

Benchmarking Flash Attention 2/3 against Cerebras CS2

Team FlashBert: Atharva Joshi, University of Southern California, Ritvik Prabhu, Virginia Tech (Mentor: Mei-Yu Wang, PSC)

The FlashBert team focused on methods for making large language models (LLMs) more efficient and energy-saving by comparing them on PSC’s Neocortex and on ACES. LLMs underlie some remarkable recent advances in AI, most notably applications such as ChatGPT. But they are expensive to run, both in the computing power and in the energy they require.

The team first deployed their Flash Attention model on Neocortex. The PSC system was built around two of the Cerebras company’s CS2 Wafer Scale Engines (WSEs), a new approach to AI that packs hundreds of thousands of processor cores onto a single, dinner-plate-sized wafer. In Neocortex, the WSEs are coordinated by a cutting-edge, high-data-capacity HPE CPU server, providing enhanced ability to work with massive data. The arrangement holds promise for greatly accelerating communication between the processors, which in turn speeds AI learning. FlashBert’s aim was then to deploy on ACES, whose flash memory could potentially make some aspects of the learning even faster.
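The operation Flash Attention reorganizes is ordinary scaled dot-product attention, sketched here in its plain, unoptimized form: each query is compared against all keys, the scores are softmaxed, and the values are mixed by those weights. Flash Attention computes the same result in memory-friendly tiles rather than materializing the full score matrix:

```python
import math

# Plain scaled dot-product attention, the computation that Flash
# Attention restructures for speed and memory efficiency.

def softmax(xs):
    m = max(xs)                   # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """For each query vector, mix the value vectors by softmaxed scores."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0]]
ctx = attention(Q, K, V)   # the query attends more to the matching key
```

Because the naive form builds an all-pairs score matrix, its memory traffic grows quadratically with sequence length, which is exactly the cost that both Flash Attention and wafer-scale hardware attack from different directions.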

The team ran their code successfully on Neocortex. They also began developing the code as it would run on ACES. Future work would include finishing the ACES deployment and also testing a sparse neural network version on Ookami, whose architecture might offer similar performance with less power required.


Source: Ken Chiacchia, PSC
