Just moments ago, the University of Florida unleashed a new teraflopper, dubbed HiPerGator, into the wild to chew on some specific research problems in life sciences, weather forecasting and materials science.
The 256-node, Dell-built cluster leverages AMD Opteron processors packed into Dell's PowerEdge C6145 chassis, lending it 16,384 cores to climb toward its 157-teraflop peak. The Red Hat Enterprise Linux 6.3-driven system comes fitted with the Dell-Terascala storage architecture, with 2.88 PB of total shared disk (256 local) and 65.5 TB of memory. Mellanox ties it together with its 56 Gbps (FDR) InfiniBand.
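The stated peak follows from simple arithmetic on the node count. The sketch below assumes a four-socket C6145 node with 16-core Opterons (64 cores per node), a roughly 2.4 GHz clock, and 4 double-precision flops per core per cycle, figures typical for Opterons of this era but not given in the article:

```python
# Back-of-envelope theoretical peak for HiPerGator.
# Assumed values (not from the article): clock speed, flops/cycle, cores/node.
nodes = 256
cores_per_node = 64      # assumption: 4 sockets x 16-core Opterons per node
clock_hz = 2.4e9         # assumption: ~2.4 GHz
flops_per_cycle = 4      # assumption: DP flops per core per cycle

total_cores = nodes * cores_per_node
peak_tflops = total_cores * clock_hz * flops_per_cycle / 1e12

print(f"{total_cores} cores, ~{peak_tflops:.1f} TF peak")
# → 16384 cores, ~157.3 TF peak
```

Under these assumptions the math lands on 16,384 cores and roughly 157 teraflops, matching the figures quoted above.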
We spoke with Dell’s Director of Research Computing, Tim Carroll, today about the purchase. He noted that while there were indeed some conversations about the processor choices, the University of Florida has been a long-time AMD shop, making the Opterons a more natural choice for UF.
While this can be considered a rather “vanilla” little cluster since it’s not decked out with Xeon Phi, GPUs, FPGAs or other secret performance sauce, Carroll explains that the issue of FLOPS or Linpack never entered a single conversation during the planning and design process. The university, like many others he works with, is simply hoping to feed an expanding pool of HPC system users—a trend that Carroll says bodes well for dispelling the once “out of reach” perceptions of using HPC systems.
Systems like this one are commonplace, but Carroll says the future holds great promise for new architectures, especially those now in experimental phases (think Student Cluster Competition), as well as for low-power servers that de-emphasize FLOPS in favor of more sustainable, real-world application performance. That said, the non-tricked-out HPC cluster variety like HiPerGator will still hold a long-standing place at similar institutions as demand for advanced computing rises and technologists and researchers alike can take a step back, “understand what they have,” and simply make good use of it for practical science.
In other words, while the press often flocks around the supersized systems (totally guilty), the capacity cluster is the underrated hero of HPC, especially as more users come to the table.
The University of Florida is using this additional capacity to rise to new challenges in key fields that could bring about more research funding and research-driven cost savings results. Among projects slated for the system are efforts to simulate 50 years’ worth of weather patterns in an attempt to save or even expand the state’s $631 million-a-year tomato industry, which accounts for 45 percent of the nation’s fresh market tomatoes.
And don’t forget about the humanities—as the researcher in the video below notes, supercomputing resources can be game-changers in fields that we don’t traditionally associate with super systems.
This has not been a rip-and-replace of the university’s previous system, an 11-teraflop Penguin machine; rather, UF built out its capabilities with some serious new space. In addition to formally introducing the system today, UF also opened the doors on its new $15 million, 25,000-square-foot datacenter to house HiPerGator—a big point of pride for the university’s soon-to-retire president, Bernie Machen, who is reportedly thrilled to see a system like this raised under his watch.
“If we expect our researchers to be at the forefront of their fields, we need to make sure they have the most powerful tools available to science, and HiPerGator is one of those tools,” said UF President Bernie Machen. “The computer removes the physical limitations on what scientists and engineers can discover. It frees them to follow their imaginations wherever they lead.”