As the 2022 Great American Supercomputing Road Trip carries on, it’s Sandia’s turn. It was a bright sunny day when I rolled into Albuquerque after a high-speed run from Los Alamos National Laboratory. My interview subjects were already in the room and ready to go. After deploying the mics, we got started with introductions.
First up was Craig Vineyard, a research scientist specializing in machine learning and neuromorphic computing. Then we had Rob Hoekstra, senior manager for Extreme Scale Computing at the lab. Next up was Sivasankaran Rajamanickam, who works on modeling/simulation and machine learning. Rounding out the pack was James Laros, a distinguished staff member at Sandia National Laboratories and a computer architect who leads several advanced computing projects at the lab.
So what did we talk about? Pretty much everything, ranging from their current compute infrastructure to the lab’s role as a test bed for emerging technologies. That’s one of the most interesting aspects of Sandia: they’re always trying out different hardware and software to see if it has potential for greater things.
They pioneered the use of Arm processors in supercomputers, along with solid-state memory devices, and they’re currently working with many different types of accelerators (plus other hardware) to see if they’re ready for prime time in tomorrow’s mainstream systems. Sandia is also giving a Cerebras wafer-scale system a try to see if it can handle simulation workloads.
In the interview, we also talked about composable infrastructure and CXL tech. The lab sees promise in this approach, but notes it doesn’t necessarily solve the most pressing problems – like memory bandwidth – that we’re facing in the industry.
Check out the interview for more meaty content from Sandia. And stay tuned for more lab visits….