Cray Addresses Academic HPC Needs with Roundtable
Universities have been at the forefront of high performance computing for decades, and many of the world’s largest supercomputers run at academic institutions. But when it comes to HPC and academia, there is no one-size-fits-all solution, representatives from supercomputer maker Cray said at a recent roundtable.
Cray hosted the roundtable discussion on HPC needs in academia at the Extreme Science and Engineering Discovery Environment (XSEDE) conference held last month in San Diego, California. The invitation-only event was well attended by representatives from colleges, universities, and supercomputing labs sponsored by academic institutions, David Barkai, Cray's head of business development in higher education, wrote in a recent blog post.
One of the speakers at the roundtable was Michael Norman, director of the San Diego Supercomputer Center (SDSC) at the University of California, San Diego. Norman talked about how Gordon, a 341-teraflop Cray CS300-AC system, is being used to meet faculty needs and enable research in a variety of scientific fields, according to Barkai.
Also speaking at the event was Byoung-Do Kim, deputy director of HPC in Advanced Research Computing at Virginia Tech. Kim talked about the university's efforts to make supercomputing more accessible to the masses.
Craig Stewart, an executive director at Indiana University, discussed Big Red II, a new Cray supercomputer recently installed at the university. Big Red II is a one-petaflop XE6/XK7 machine that will be used to conduct research in medicine, engineering, the life sciences, the physical sciences, the social sciences, climate research, and the humanities.
The diverse types of workloads that SDSC's Gordon and IU's Big Red II run are not uncommon in academic settings, and Cray can address these different needs with different types of systems, Barkai wrote in his blog.
“Academic institutions can fine-tune their high-performance computing investment to meet their precise needs and fit their budget. If a small-scale teraflop system is the right fit, Cray can do that. If an organization wants to push performance barriers and reach petaflop performance capabilities, that is also an option,” he wrote.
For example, academic institutions that demand flexibility and adaptive architectures may choose Cray’s XC30 or XK7 systems, while those that want sheer power and throughput for data-intensive research may opt for Cray’s CS300-LC or CS300-AC clusters. Cray also offers the Urika graph analytics appliance, which is designed to tackle big data problems.
The roundtable also featured a lively panel discussion, which Barkai hosted and which also included representatives from the Texas Advanced Computing Center (TACC), the National Center for Supercomputing Applications (NCSA), the University of Colorado, the National Science Foundation (NSF), and the University of Tennessee/National Institute for Computational Sciences (UT/NICS).
The panel discussed “the transformative nature” of big data and data-intensive computing in academia; the complexity of heterogeneous computing concepts; and the unique challenges of small institutions beginning HPC programs. “We also spent a long time talking about the growing need for newer programming languages, something that Cray is working to address through its Chapel initiative,” Barkai wrote.