ALCF Summer Students Gain Hands-on Experience with HPC

October 4, 2022

As part of the ALCF’s summer student program, more than 30 undergraduate and graduate students worked alongside staff mentors to gain real-world experience with supercomputing, data science, and AI projects.

ALCF summer students (left to right) Sirak Negash, Alina Kanayinkal, Ryien Hosseini, Alan Wang, and Saumya Singh.

Every summer, the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science user facility located at DOE’s Argonne National Laboratory, hosts a new group of students to take on real-world scientific computing projects, providing valuable opportunities to work with research teams and learn new skills.

“It’s important to provide educational and career opportunities for students to take their next steps, gain confidence, and have new experiences working on impactful research projects outside of the classroom,” says Michael Papka, ALCF director and professor of computer science at the University of Illinois Chicago. “Our summer student program gives them the chance to see possibilities of what their careers could look like.”

This year’s class of ALCF summer students, which included more than 30 students ranging from undergraduates to doctoral candidates, tackled projects aimed at using artificial intelligence (AI) to analyze bird songs, visualizing large scientific datasets, advancing high energy physics research, and more. In the summaries below, five of the students spoke about what they worked on this summer and where they think the experience will take them next.

AI Analysis of Bird Audio

Saumya Singh, a graduate student studying AI at Northwestern University, is interested in researching self-supervised learning and reinforcement learning in AI and natural language processing. This summer she worked with mentors Michael Papka and Argonne computer scientist Nicola Ferrier on a project that used AI to analyze bird song audio collected from microphones in forests to provide insights into their ecosystems.

Singh was drawn to this project because of its significance to the environment and what it can reveal about forest ecosystems. “Birds or animals are a great predictor of the environment that they’re living in,” she says.

Her project used a new algorithm released by Facebook AI Research and employed self-supervised learning, meaning the model learned directly from the audio without requiring researchers to provide labels.
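The label-free idea can be illustrated with a toy sketch. This is not the (unnamed) Facebook AI Research algorithm used in the project; it simply shows the core self-supervised trick of deriving training targets from the raw data itself:

```python
# Toy illustration of self-supervised learning on a signal: each training
# "label" is taken from the data itself (the sample that follows a window),
# so no human-provided annotations are required. This is a teaching sketch,
# not the algorithm used in the actual project.
def make_self_supervised_pairs(signal, window=4):
    """Build (input window, next-sample target) pairs from an unlabeled signal."""
    return [
        (signal[i : i + window], signal[i + window])
        for i in range(len(signal) - window)
    ]

if __name__ == "__main__":
    audio = [0.0, 0.1, 0.3, 0.2, -0.1, -0.4]  # stand-in for recorded samples
    for inputs, target in make_self_supervised_pairs(audio):
        print(inputs, "->", target)
```

Every windowed slice of the recording becomes a training example, which is why no pre-labeled dataset is needed.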

“The main thing that I feel is going to help me is self-supervised learning because the main problem that we have for any of the data science projects is the pre-processing data labeling, so it will be great if we can solve the problem,” Singh says. “I can apply it to several other projects.”

Having previously worked with images and text, this project provided the opportunity to work with sound, large datasets, and new algorithms. “All these new techniques that I worked on,” Singh says, “seemed to be really fruitful for me to continue ahead in this data science-machine learning career path.”

Command-Line Interface, Python Concurrency, and AI Models

Alan Wang, a computer science student at the University of Illinois, was interested in working at the ALCF because of the powerful supercomputers and software tools it makes available for research. Though mostly interested in system security, Wang pursued research at the ALCF spanning facility operations, the Python programming language, and AI.

This summer he worked on three projects with ALCF mentors Paul Rich, George Rojas, and Bill Allcock: a command-line interface project aimed at making it easier for system administrators to carry out searches across the ALCF’s home directories; a Python concurrency project comparing the speed and performance of different concurrency libraries; and a project running AI models that use the open-source machine learning frameworks PyTorch and TensorFlow on the ALCF AI Testbed’s Cerebras and SambaNova systems.
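As a flavor of the kind of comparison such a concurrency project involves, here is a minimal sketch (not Wang’s actual benchmark) that times a sequential loop against the standard library’s thread pool on an I/O-style workload, where threads typically pay off:

```python
# Minimal sketch comparing sequential execution with a ThreadPoolExecutor
# on an I/O-bound workload. The sleep stands in for real I/O such as a
# filesystem search; this is illustrative, not the project's benchmark.
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(delay):
    """Stand-in for an I/O-bound operation (e.g., waiting on the filesystem)."""
    time.sleep(delay)
    return delay

def run_sequential(delays):
    start = time.perf_counter()
    results = [io_task(d) for d in delays]
    return results, time.perf_counter() - start

def run_threaded(delays, workers=8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(io_task, delays))
    return results, time.perf_counter() - start

if __name__ == "__main__":
    delays = [0.05] * 8
    _, t_seq = run_sequential(delays)
    _, t_thr = run_threaded(delays)
    print(f"sequential: {t_seq:.2f}s  threaded: {t_thr:.2f}s")
```

For CPU-bound work the picture changes, since Python’s global interpreter lock limits threads; that is exactly the kind of trade-off such a comparison surfaces.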

Wang says that one of the most significant things he got out of this summer was learning more about using Python. He began the internship with around five years of Python experience, saying “I thought I had everything down but not even close. So, I learned a lot of Python and got exposed to using it in a lot of different environments.” Wang also was introduced to new software tools, such as the Emacs text editor, and worked with AI for the first time.

“I was surprised how interconnected AI was with systems, so knowing both sides and having an AI background will also be extremely helpful for me in the future,” Wang says.

Benchmarking Graph Neural Networks for Science on AI Accelerators

Ryien Hosseini’s work with the ALCF team was at the intersection of neural network algorithms and high performance computing. “My projects used computing resources in order to see how far we can push these algorithms known as graph neural networks for various scientific applications,” he says.

Hosseini, a graduate student in electrical and computer engineering at the University of Michigan, was interested in working at the ALCF due to the research-oriented nature of the internship, and to have access to the facility’s powerful computational resources. This summer, with ALCF mentors Filippo Simini and Venkat Vishwanath, he co-authored a workshop paper that assessed the performance of graph neural networks on NVIDIA GPUs (graphics processing units) and worked on another project that looked at the performance of graph neural networks on specialized hardware platforms.

In addition, Hosseini contributed to an effort that uses chemical docking for drug discovery. The project builds on previous work: instead of using neural networks alone to select molecules, the team now uses “the neural network as a pre-filter in order to choose a top percentage of candidates, and then those will go into a classical non-machine learning based algorithm, which is better at arriving at those final numerical estimates,” says Hosseini.
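The pre-filter pattern Hosseini describes can be sketched in a few lines. Everything here is hypothetical (the candidate records and both scoring functions are invented stand-ins); the point is the structure: a cheap surrogate ranks all candidates, and only the top fraction pays for the expensive, higher-fidelity scorer:

```python
# Hypothetical sketch of a neural-network pre-filter pipeline: a cheap
# surrogate score (standing in for the neural network) shortlists candidates,
# and only the shortlist is passed to the expensive scorer (standing in for
# classical docking). All names and scoring functions are invented.
def prefilter_then_score(candidates, surrogate, expensive, top_fraction=0.1):
    # Rank every candidate with the cheap surrogate model.
    ranked = sorted(candidates, key=surrogate, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    shortlist = ranked[:k]
    # Only the shortlist pays the cost of the accurate scorer.
    return sorted(shortlist, key=expensive, reverse=True)

if __name__ == "__main__":
    molecules = list(range(100))   # stand-ins for molecule records
    cheap = lambda m: m % 50       # fast, approximate score
    accurate = lambda m: m         # slow, accurate score (pretend)
    print(prefilter_then_score(molecules, cheap, accurate, top_fraction=0.05))
```

With a 5 percent cutoff, the expensive scorer runs on 5 candidates instead of 100, which is the whole economic argument for the pre-filter.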

“I feel like I learned a lot both from thinking about high-level research ideas, high-level algorithms, and then really getting into the nitty gritty and doing the programming in order to implement those algorithms,” says Hosseini, who will be applying to PhD programs in the fall. “Having this structured, rigorous research background has really been helpful.”

High-Quality Visualizations for Large Scientific Datasets

Alina Kanayinkal is interested in computer graphics, particularly the computational side of animation. In her summer at the ALCF, she worked with the Message Passing Interface, or MPI (a standard for programming parallel computers), and image rendering, continuing the work she began as a student assistant to Tommy Marrinan, an Argonne scientist who also teaches at the University of St. Thomas.

Kanayinkal’s summer project focused on creating a workflow for rendering high-quality visualizations of large-scale datasets. Her research aims to leverage cinematic rendering tools (similar to those used by Pixar and DreamWorks) to create visualizations of scientific datasets that are too large or too time-consuming to render on a single computer. While the workflow is generic enough for many types of scientific data, Kanayinkal worked with data from a coupled fluid flow and particle simulation investigating cancer cell transport, as well as a molecular dynamics simulation investigating material friction. The ultimate goal of her work is to develop an easier and less time-consuming way to create these visualizations.
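The core of such a multi-computer rendering workflow is work decomposition. The sketch below simulates the idea in plain Python; in a real MPI program each rank would render its assigned tiles and the results would be composited at the end, but the tiling and assignment logic looks much the same:

```python
# Simplified sketch of the work decomposition behind parallel image
# rendering: split the frame into tiles and deal them out to ranks.
# The "ranks" here are simulated; a real version would use mpi4py.
def tile_image(width, height, tile_size):
    """Yield (x, y, w, h) tiles covering a width x height image."""
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            yield (x, y, min(tile_size, width - x), min(tile_size, height - y))

def tiles_for_rank(tiles, rank, num_ranks):
    """Round-robin assignment of tiles to a given rank."""
    return [t for i, t in enumerate(tiles) if i % num_ranks == rank]

if __name__ == "__main__":
    tiles = list(tile_image(1920, 1080, 256))
    for rank in range(4):
        print(f"rank {rank}: renders {len(tiles_for_rank(tiles, rank, 4))} tiles")
```

Round-robin assignment is the simplest balancing strategy; production renderers often use dynamic work stealing instead, since some tiles cost far more to render than others.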

Kanayinkal says one of her major takeaways from this summer at ALCF was realizing that research is “not a huge, scary thing. It is a big thing, but it’s not so big that it’s overwhelming.” She also has become more comfortable with learning on the fly, for instance learning MPI and the OpenEXR format for imaging applications.

Moving forward she is continuing to work with Marrinan and choosing projects that she enjoys working on, saying “if it’s something that you like, and you get frustrated, you’re just going to take a five-minute break and then come back and continue working on it rather than just being like ‘Forget it. I’m going to do something else.’”

Hyperparameter Optimization and Scaling Studies for ML Models in Physics Research

As a student at the University of Notre Dame, Sirak Negash worked with machine learning (ML) to help analyze data from particle physics experiments. This inspired him to continue pursuing machine learning studies, especially for high energy physics. He initially applied for a position as a summer research aide to gain more experience in physics research. “I was pleasantly surprised when I was contacted for a role at ALCF that involved working with an ML model in physics,” he says.

Collaborating with ALCF mentor Sam Foreman, Negash worked on determining the impact of different hyperparameter configurations on model performance and training cost for simulations of lattice quantum chromodynamics (the theory of the strong interaction between quarks and gluons).

“I was able to complete a detailed set of studies on how scaling the lattice volume impacted the training cost when run on the ALCF’s Theta supercomputer,” Negash says.
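The shape of such a scaling study can be sketched schematically. This is not Negash’s actual study; the “training step” below is an invented stand-in whose cost grows with the number of lattice sites, which is enough to show how step time is measured against lattice volume:

```python
# Schematic sketch of a volume-scaling study: time a per-step workload at
# increasing lattice sizes L, where an L^4 lattice has L**4 sites. The
# workload is a stand-in, not an actual lattice QCD training step.
import time

def fake_training_step(lattice_size):
    """Stand-in workload: touch every site of an L^4 lattice once."""
    volume = lattice_size ** 4
    total = 0.0
    for site in range(volume):
        total += site * 1e-9
    return total

def time_step(lattice_size, repeats=3):
    """Best-of-N wall-clock time for one step at the given lattice size."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fake_training_step(lattice_size)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    for L in (4, 8, 16):
        print(f"L={L:2d}  volume={L**4:6d}  step time={time_step(L):.4f}s")
```

Because volume grows as L to the fourth power, even modest increases in lattice size multiply the per-step cost, which is why this kind of measurement matters before committing supercomputer time.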

The effort has been helpful to the ALCF because future research on quantum chromodynamics “can greatly benefit from an understanding of how the performance of these simulations is scaling with larger and larger lattice size,” he says.

After spending his summer at the ALCF, Negash says he “developed a new appreciation for science beyond the classroom and even beyond a physical lab, and the lessons and skills I have learned through this opportunity in ML research have kindled in me the desire to pursue a career in data analytics.”


Source: Emily Stevens, ALCF
