At the geographic South Pole, thousands of sensors spanning a cubic kilometer of ice are buried up to two and a half kilometers beneath the surface. The sensors are part of IceCube, an Antarctic observatory dedicated to detecting and analyzing neutrinos – quiet, mysterious particles spawned by nuclear reactions that almost never interact with matter. The weekend before the biggest HPC conference of the year, SC19 in Denver, researchers supporting IceCube leveraged around 51,000 cloud-based GPUs to help make sense of the data collected by IceCube’s massive sensor array.
“For the first time in human history, we have instruments on the ground that can measure neutrinos, can measure gravitational waves, and can measure different frequencies of light in order to look at celestial phenomena,” explained Frank Wuerthwein, lead for high-throughput computing at the San Diego Supercomputer Center (SDSC), a professor of physics at the University of California San Diego and executive director of the Open Science Grid, in an interview with HPCwire. “The big picture is to study the most violent events in the universe … The idea is that if you have multiple types of detection mechanisms, you can unravel what exactly is going on to make these violent events.”
The ice-based sensors detect the signatures of passing neutrinos, collecting data from the flashes of Cherenkov light – optical shockwaves of a sort – that neutrino interactions send rippling through the ancient ice. So where does computing enter the process? “They need to understand the ice properties,” Wuerthwein said, “and that’s done with simulation.”
Building an experiment
This became the seed of Wuerthwein’s grand experiment: using IceCube’s science goals to attain the largest scale ever achieved in cloud-based GPU simulations. The experiment had three objectives: producing data that would actually be used for scientific purposes; learning how readily organizations could burst to very large scales; and learning the global capacity of GPUs in the cloud. The burst, he said, would accomplish about a month’s worth of simulation work for IceCube in a single hour.
Originally, they set out to use Amazon Web Services (AWS) for the experiment. As planning progressed, they reached the upper bounds of AWS’ availability – and then the upper bounds of the planet’s. “It went from doing an exaflop-hour in AWS,” Wuerthwein said, “into buying the entire capacity of GPUs across AWS, Microsoft Azure and Google Cloud, because only when we buy the entire global capacity do we reach the scale that we’re shooting for.” In fact, even with all three cloud providers in play, the capacity fell short of the target of 80,000 Nvidia V100 GPUs (“Call me greedy, or ambitious,” Wuerthwein said).
Taking over the world’s GPUs
Wuerthwein laid out the plan like it was a heist. “So, we’re going to try to get all the available GPUs on the planet – and we’re going to be evicted any time anybody else wants any GPU anywhere, basically,” he said. Eight Nvidia GPU models were in play: the V100, the P100, the P40, the P4, the T4, the M60, the K80 and the K520. Each handled an IceCube workload tailored to its capabilities, and the jobs were designed to take just 15 to 30 minutes in order to reduce the risk of being booted off GPUs by sudden demand from other cloud customers.
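To illustrate the idea of tailoring jobs to each GPU model, here is a minimal sketch of how per-model job sizing might work: scale the amount of simulation packed into each job by the device’s relative throughput so that every job lands in the 15-to-30-minute window. This is not IceCube’s actual code, and the throughput numbers are made-up placeholders, not benchmarks.

```python
# Hypothetical sketch: size each job so it finishes in roughly 15-30 minutes
# regardless of which GPU model it lands on. Throughput values are placeholders.

RELATIVE_THROUGHPUT = {   # assumed photon-propagation work units per minute
    "V100": 8.0, "P100": 4.0, "P40": 3.5, "P4": 1.5,
    "T4": 3.0, "M60": 1.5, "K80": 1.0, "K520": 0.5,
}

TARGET_MINUTES = 20  # aim for the middle of the 15-30 minute window

def units_per_job(gpu_model: str) -> int:
    """Return how many simulation work units to pack into one job for this model."""
    rate = RELATIVE_THROUGHPUT[gpu_model]
    return max(1, round(rate * TARGET_MINUTES))

if __name__ == "__main__":
    for model in RELATIVE_THROUGHPUT:
        print(f"{model}: {units_per_job(model)} units per ~{TARGET_MINUTES}-minute job")
```

Keeping jobs short in this way limits how much work is lost when a preempted (spot or low-priority) cloud instance disappears mid-run.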
Coordinating the constituent cloud providers took some doing, Wuerthwein said – as did getting a sense of what to expect on the day in question. “It took some convincing in some cases to give us some information about what to expect,” he said. The scope was huge: 28 cloud regions across three continents (North America, Europe and Asia).
Such a massive experiment, of course, required teamwork. Wuerthwein highlighted the efforts of Igor Sfiligoi (lead scientific software developer at SDSC) and David Schultz (filtering programmer at the Wisconsin IceCube Particle Astrophysics Center [WIPAC]), who were crucial to making the experiment a technical reality, as well as Benedikt Riedel (computing manager at WIPAC), who helped Wuerthwein to coordinate the agencies involved.
The experiment was funded by a National Science Foundation (NSF) grant of almost $300,000. For the first day of burst simulations, conducted on a Saturday (the day with the lightest expected cloud demand, Wuerthwein explained), the team expected to spend around $120,000 to $150,000, with the remainder reserved for a second burst planned for a quiet period around Thanksgiving or Christmas.
Prior to the experiment, the team ran scalability tests on a few thousand GPUs at a time, for an hour or so, on individual providers. And then, finally, on November 16th, they did it. At SC19, Wuerthwein revealed the results: a peak of 51,000 GPUs operating in tandem in a single HTCondor pool, all running IceCube’s simulations. “At peak,” Wuerthwein wrote, “our cloud-based cluster provided almost 90% of the performance of Summit, at least for the purpose of IceCube simulations.” Due to budget constraints, the team ramped down the experiment after the two-hour mark – a total success, apart from a couple of hiccups when terminating the jobs.
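For readers unfamiliar with HTCondor, a pool like this is just a collection of execute nodes reporting to a central collector, which can be queried for what resources are advertised. The sketch below uses HTCondor’s Python bindings to tally GPUs by device model; it is an illustration only, and attribute names such as TotalGpus and CUDADeviceName are assumptions that depend on how GPU discovery is configured on the execute nodes, not a description of IceCube’s actual setup.

```python
# Hypothetical sketch: count GPUs advertised in an HTCondor pool, grouped by model.
from collections import Counter

import htcondor  # HTCondor Python bindings

collector = htcondor.Collector()  # defaults to the locally configured collector
slots = collector.query(
    htcondor.AdTypes.Startd,
    constraint="TotalGpus > 0",                            # assumed attribute name
    projection=["Machine", "TotalGpus", "CUDADeviceName"],  # assumed attribute names
)

by_model = Counter()
for ad in slots:
    model = ad.get("CUDADeviceName", "unknown")
    by_model[model] += int(ad.get("TotalGpus", 0))

for model, count in by_model.most_common():
    print(f"{model}: {count} GPUs")
print(f"total: {sum(by_model.values())} GPUs advertised in the pool")
```

At the scale described above, the same kind of query is what lets operators watch the pool grow and shrink as cloud instances join and are preempted.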
Wuerthwein hopes that this experiment will pave the way for many other applications. “We have a very, very wide range of different scientific problems,” he said, “all of which could be served by the same infrastructure we’re using for IceCube. Once we have understood how to do this with IceCube, we can offer it to anybody else as a service.” He was also pleased with the reception at SC19: “Everybody considers what we achieved a huge success,” he said, “despite the fact that we fell way short of what we were shooting for.”
Still, Wuerthwein doesn’t expect such massive bursts to become regular occurrences in the near future. After all, he explained, the cost would skyrocket if users wanted to burst at that scale on a regular basis, rather than just a few times a year. “Right now, it’s once-in-my-lifetime,” he said. “I don’t expect that I will have customers knocking on my door by the dozens who want to do this.”
“I think that this falls into the category of – you do something because you’re pushing boundaries. It’s a heroic calculation.”