
April 4, 2011

NC State Completes Crash-Test Cluster

Tiffany Trader

Since the cost of a high-end supercomputer, the kind found at top academic institutions and national labs, can run into many millions of dollars, it’s only natural to want to treat these valuable resources as gingerly as possible. Administrators are prudent to enact conservative permissions schemes and to allow only well-vetted applications to run on these machines. But such protective measures, while understandable and even commendable, can stifle the kind of innovation and tinkering that often leads to better code and time-saving efficiencies. That’s where the idea of a test system comes into play. A computational testbed allows developers and researchers to try out new software ideas on smaller-scale, less-expensive machinery, a resource built on the premise that it’s OK to break it.

The testbed cluster recently completed at North Carolina State University was the brainchild of Frank Mueller, a computer science professor at NC State. Seeing the need for a more flexible supercomputer, he decided to create his own. Mueller’s team completed work on the ARC cluster on March 30. ARC stands for “A Root Cluster,” as the computational infrastructure will primarily support research into scalability for system-level software solutions. This will involve making changes to the cluster’s entire software stack, including the operating system. Once Mueller and his team are able to demonstrate the worthiness of a solution, they can implement it on a big-name system, such as the Jaguar supercomputer at Oak Ridge National Laboratory.

In an item from The Abstract blog, the official blog of the NC State Newsroom, Mueller is quoted as saying:

“We can do anything we want with it. We can experiment with potential solutions to major problems, and we don’t have to worry about delaying work being done on the large-scale systems at other institutions.”

Today’s generation of supercomputers experiences failures a couple of times a day on average, translating into hours of lost work. The coming class of exaflop-level machines, however, is anticipated to exhibit one-billion-way parallelism. The implication of all those cores is a sharp jump in how often failures occur: the more components a system has, the more frequently one of them fails. That is why it is so important to increase hardware reliability or make systems more error-tolerant, and being able to try out theories on a crash-test cluster will help accomplish these goals.
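To make the scaling concrete, here is a minimal back-of-the-envelope sketch (not from the article) of why adding nodes shrinks the time between system-level failures. It assumes independent, exponentially distributed node failures, so failure rates simply add; the per-node MTBF figure and node counts below are hypothetical.

```python
# Illustrative only: system MTBF under the assumption of independent,
# exponentially distributed node failures. All numbers are hypothetical.

HOURS_PER_YEAR = 8760

def system_mtbf_hours(node_mtbf_years: float, node_count: int) -> float:
    """Approximate system-level mean time between failures, in hours.

    With independent exponential failures, per-node failure rates add,
    so the system MTBF is roughly the per-node MTBF divided by the
    number of nodes.
    """
    return (node_mtbf_years * HOURS_PER_YEAR) / node_count

if __name__ == "__main__":
    for nodes in (1_000, 10_000, 100_000, 1_000_000):
        mtbf = system_mtbf_hours(node_mtbf_years=5, node_count=nodes)
        print(f"{nodes:>9} nodes -> one failure roughly every {mtbf:7.2f} hours")
```

Under these assumed numbers, a machine with 100,000 five-year-MTBF nodes would see a failure roughly every half hour, which is why fault-tolerance research of the kind ARC supports matters long before exascale node counts arrive.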

The ARC cluster was made possible by a $549,999 NSF grant, with additional support coming from NVIDIA and NC State. With 1,728 processor cores and 36 NVIDIA Tesla C2050 GPUs spread across 108 compute nodes (32 GB of RAM each), it is now the largest academic HPC system in North Carolina.

Full story at NCSU Abstract Blog
