Many panels at SC22 focused on how supercomputing centers can help others recover from disasters – but one panel, “Facing the Unexpected: Disaster Management Capabilities,” focused mainly on how supercomputing centers can shield themselves from the impact of disasters. Moderated by Daniel Reed of the University of Utah, the panel featured Anna Maria Bailey of Lawrence Livermore National Laboratory (LLNL); Dieter Kranzlmüller of the Leibniz Supercomputing Centre (LRZ); David Martinez of Sandia National Laboratories (SNL); and Satoshi Matsuoka of Riken.
“Increasingly, we are pieces of critical infrastructure as countries plan for and respond to disasters, and that has some implications both for how we operate our systems – business continuity issues – but also [for] how we work with government agencies and citizens to plan and model and respond to disasters,” Reed said, noting that many disasters are increasing in frequency due to climate change and that supercomputing centers are increasingly dependent on large amounts of power and water. And, of course, not all disasters are created equal: “There are also different kinds of timelines that require different kinds of actions. If you’re modeling hurricanes you actually have a few days to think about that – that’s kind of the art of it. If, on the other hand, you’ve been pressed to say ‘we have an instant radiological disaster,’ you may have only half an hour to make a decision about what you do.”
Bailey explained how the disaster management conversation has been changing at LLNL in recent years. “When you’re doing project planning you’re accounting for some risks, and you’re looking for some unforeseen conditions,” she said. “And those unforeseen conditions can be something like supply chain issues – maybe there’s a shortage on copper – or maybe you’re going to have some severe weather coming your way, and so forth. But you plan for those and you assign some contingency values to each of those risks and then you go through the project and you retire those risks as they become either real or, if they’re not real, then you can move the contingency around.”
But, she said, the decade has been brutal so far: Covid, supply chain issues (“Things that typically would take weeks to get are taking months to get”), bizarre weather, lightning storms, huge fires (normally not seen so close to Livermore) and extreme heat that prohibits air cooling. “We went into the mode of no longer voluntary shedding of load, but having to plan for involuntary shedding of load at our site,” Bailey said, adding that it was the first time in her career that such a plan was needed. “Risk planning is definitely changing.”
Kranzlmüller struck a similar note, discussing how recent floods close to LRZ had him thinking more about how prepared the center was for previously unlikely risks.
“Strangely enough, there is a little river just close by [the computing center],” he said. “And of course, you never notice – you just go across the bridge to your office, you never think of it. But one day, by accident, I noticed that one of the political parties here had their web servers flooded by a nearby river. So I went to my guys and said: ‘Could this happen to us as well? What happens if the river comes out of its [banks]?’ And my people told me: ‘That’s not a problem, we are prepared for this, we have a plan for how to react and where to put the sand and how to protect the building from water coming in from the river.’”
So, when the pandemic struck, Kranzlmüller reached out again. “Do we have a reaction scheme for a pandemic?” he asked the team. “And my team said: ‘Yes, it is on this particular webpage – just look it up!’” Kranzlmüller said that, while there were some things in the plan that needed to be adjusted, the fact that a plan did exist allowed them to switch to work-from-home and operate the entire center remotely within 24 hours.
Now, he said, new concerns have come to light: amid skyrocketing fuel prices and scarcity, “There is some fear about potential power outages or even blackouts.” At LRZ, teams are working to identify the absolute minimum power required to avoid damage to the systems, to assess whether the center’s diesel generators could cover those needs, and to outline clean shutdown and restart procedures for the supercomputers.
In New Mexico’s high desert, Martinez is contending with a very different set of issues at Sandia National Laboratories. One of the biggest forest fires in state history had recently burned across the state for at least a month. The fire itself was far away, Martinez said, but the smoke became a problem. “We weren’t prepared to face the smoke issue with our airside economization,” he said. “We really counted on that. So basically we had to shut it down and leverage our mechanical cooling.” After that, the center began deploying advanced smoke detectors. The lab has also done substantial work to reduce its water needs – saving around 15 million gallons per year – to drastically increase its drought resilience.
As one of the laboratories under the umbrella of the National Nuclear Security Administration (NNSA), Sandia is also particularly sensitive to extreme scenarios such as wartime disruption, and Martinez explained how the lab prepares extensively for power outages. “We simulate outages every quarter, and we basically shut all the main power down and go on generation to see how everything operates and see if we’re [coming up short] on any pieces of equipment,” he said, adding that the simulations were “a little bit of a pain” but were helping with readiness.
Supply chain issues, on the other hand: a “nightmare.”
“We’ve always been proactive at Sandia when we’ve designed our systems,” Martinez said. “We usually watch the copper prices and things like that, purchase our large wire and things like that based on the economy and where it’s going so we can save some money.” But recently, Sandia’s construction needs – such as transformers for 5-6 MW of additional capacity – have come with lead times as long as a year and a half, forcing the lab to purchase interim equipment (which would not typically have been up to snuff) to avoid having new supercomputers sitting on the floor, unable to run.
On the proactive side, Martinez said that Sandia is working on procuring more renewable energy. “The goal would be to build our own little … micro-grid and use the grid as a backup,” he said, “because the grid is an aging system throughout the United States.” That aging infrastructure, he added, presented vulnerabilities (e.g. to cyberattacks) for the highly secure lab.
Meanwhile, in Japan, Matsuoka said that Riken faced a plethora of potential disasters. “The country has been quite resilient because we have so many disasters,” he said, and Riken had prepared for nearly all of them. (“We’re not prepared for war, unfortunately, because we have a constitution that prohibits us from engaging in war.”)
Riken’s Center for Computational Science (R-CCS) planned much of this well in advance. First, the center itself is located in Osaka Bay, hidden behind a new airport that shields it even from major tsunamis. Second, Matsuoka explained, the six- or seven-story center was built with the earthquake protections of a 50-story skyscraper, including above-ground levels that rest on rubber and metal dampers. The air-handling unit room, he said, is below ground and could capture a massive amount of water in the event of tsunami-induced flooding. And, of course, Riken’s Fugaku supercomputer sits on the top floor of the center – uncharacteristic for such a large system – shielding it from the immediate impacts of flooding and earthquakes.
Still, even Riken has been facing challenges for which it was less prepared. During Covid, Matsuoka said, the center divided its infrastructure team into two groups so that if one group experienced a Covid infection, the other remained safe and could continue working. An ongoing problem, though: typhoons and lightning, which have caused significant downtime and for which Riken is still developing mitigation plans.
But, Matsuoka said, by far the biggest crisis at the moment was the rising cost of electricity due to the war in Ukraine. Those costs – as detailed in a previous HPCwire article – had forced Riken to shut down 30% of Fugaku and to ask users to slim down their computing needs. Riken recently received extra funding to restore full operation of Fugaku, but Matsuoka said that the measures put in place during the shortage continue to deliver 15-20% energy savings relative to pre-crisis conditions without impacting users’ workload times.
“Sometimes,” he said, “it is important to suffer a crisis.”