Arctic Region Supercomputing Center Gets Cold Shoulder from DoD
At a time when supercomputing centers seem to be multiplying across the US, the one up in Alaska looks like it could become an endangered species. The Arctic Region Supercomputing Center (ARSC) is slated to lose its Department of Defense (DoD) funding at the end of May 2011, putting the jobs of nearly 50 employees in jeopardy and shrinking the scope of the work done at the northernmost HPC facility in the United States.
Fairbanks-based ARSC is a dual-purpose supercomputing center, serving researchers at the University of Alaska Fairbanks (UAF) as well as the DoD's High Performance Computing Modernization Program (HPCMP). This two-pronged mission has been in effect since the center was inaugurated in 1993, and it has given the university access to some world-class supercomputing machinery.
ARSC is currently one of six HPCMP centers, the others being the Army Research Laboratory DSRC at Aberdeen Proving Ground, Maryland; the Air Force Research Laboratory DSRC at Wright-Patterson AFB, Ohio; the Maui High Performance Computing Center in Kihei, Maui, Hawaii; the Army Engineer Research and Development Center DSRC in Vicksburg, Mississippi; and the Navy DoD Supercomputing Resource Center at Stennis Space Center, also in Mississippi.
A pre-Thanksgiving email to ARSC confirmed what many at the center had suspected, namely that the center would lose its DoD funding after the current money expires next May. Today the center is funded to the tune of $12 to $15 million, and the DoD slice represents around 95 percent of the total.
According to ARSC director Frank Williams, they’ve been looking to move the UAF academic work off the DoD HPC platforms for the past couple of years, and that process is now complete. “We came of age just in time,” he told HPCwire.
That academic work was transferred to the recently deployed “pacman” system, an AMD Opteron-based HPC cluster from Penguin Computing. Funding for this system came from a number of NSF grants (one of which was named Pacific Area Climate Monitoring and Analysis Network, or PACMAN for short). The machine was procured explicitly for the academic users at UAF, and is being used to support a range of Arctic-oriented scientific research, including studies of climate change, ocean circulation, permafrost, tsunamis, and regional weather patterns.
The pacman system is actually the synthesis of three separate procurements, which were subsequently consolidated into a single cluster. The combined machine encompasses more than 2,000 CPU cores made up of AMD's latest Magny-Cours Opterons. There are also a couple of NVIDIA Fermi GPU-equipped nodes on pacman, which play into the university's research with GPGPU computing. The UAF researchers are happy to have a recent-vintage machine devoted entirely to their work. "It's pretty skookum," said Williams, employing the Alaskan slang for something really cool or excellent.
Some maintenance and operational support for pacman was included with the original NSF funding, and the center is now working with the university to extend that support beyond the end of the DoD money. In fact, UAF is on the hook to pick up the entire operational budget of the datacenter, something the university is prepared to do, according to Williams.
The hard part will be figuring out a way to transition the people who were dependent on DoD work to the academic side. Given that most of the 50 or so ARSC employees were being funded out of the HPCMP money, that will be quite a challenge.
As for the DoD systems themselves, the majority housed at ARSC are smaller test and development machines. The center's main production machine is Chugach, a Cray XE6 'Baker' supercomputer, which was part of a recent big procurement under HPCMP. That system has been moved to the Vicksburg center and is being run remotely. Chugach's predecessor, an XT5 machine named Pingo, is also in production, but as soon as Chugach completes acceptance testing (which is imminent), Pingo will be retired.
Starting next June, ARSC will be forced onto the more traditional path of a university-based HPC center, relying mainly on NSF and local funding to keep its systems up and running. Williams is glad to see the UAF administration stepping up to fill some of the void left by the DoD's exit, but it remains to be seen how smooth that transition will be. "We really have the hardware to support academic high performance computing research," said Williams. "Now it's just a matter of making sure we can find a way to have enough staff to support it. We don't aspire to be a $15 million academic center at this point."