November 18, 2008
BLACKSBURG, Va., Nov. 18 -- Five years ago, Virginia Tech burst onto the high-performance computing scene by using Apple Power Mac G5 computers to build System X, one of the fastest supercomputers of its time. Today, Srinidhi Varadarajan and Kirk W. Cameron, professors of computer science in Virginia Tech's College of Engineering (http://www.eng.vt.edu) and members of the Center for High-End Computing Systems (CHECS), have architected a new supercomputer.
This time, while the new System G supercomputer is twice as fast as its predecessor, their primary goal was to demonstrate that a supercomputer can be both fast and environmentally green.
System G clocks in at an incredible 22.8 TFlops (22.8 trillion floating-point operations per second). And keeping with tradition, though bid under a competitive contract, the machine consists of 325 Mac Pro computers, each with two quad-core 2.8 gigahertz (GHz) Intel Xeon processors and eight gigabytes (GB) of random access memory (RAM). "However, the novelty of this machine does not end there," Varadarajan said.
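As a rough back-of-the-envelope check (the per-cycle figure below is an assumption not stated in the announcement: Xeon processors of that generation could retire roughly four double-precision floating-point operations per core per clock), the theoretical peak of such a configuration would be about

    325 nodes x 2 processors x 4 cores x 2.8 GHz x 4 flops/cycle ≈ 29.1 TFlops,

which suggests the quoted 22.8 TFlops is a sustained figure rather than the theoretical peak.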
They will discuss System G this week at the SuperComputing08 conference at the Austin Convention Center.
Most high-performance computing systems research is conducted at small scales of 32, 64, or at most 128 nodes. Larger machines are typically used in production mode, where experimental software is anathema to end users focused on solving fundamental problems in computational science and engineering. System G was sponsored in part by the National Science Foundation and CHECS to address this gap in scale between research and production machines. Its purpose is to provide a research platform for developing high-performance software tools and applications with extreme efficiency at scale.
"Given our research strengths at the Center for High-End Computing Systems, we were able to partner with Mellanox to create the first supercomputer running over quad data rate (QDR) InfiniBand (40Gbs) interconnect technology. The low latency and high bandwidth characteristics of QDR InfiniBand enable new research in transparent distributed shared memory systems that focus on usability of cluster supercomputers," said Varadarajan, director of CHECS. In preliminary tests, System G was able to obtain transfer rates of over three gigabytes per second with small message latencies close to one microsecond.
Given these state-of-the-art communication rates (e.g., data sets consisting of nearly one billion numbers traveling between any two compute nodes in one second, with the first value arriving in one-millionth of a second), supercomputer systems and applications requiring unprecedented levels of data movement can be considered.
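Latency and bandwidth figures of this kind are typically obtained with a ping-pong microbenchmark between two nodes. The sketch below is illustrative only; it is not the benchmark code used on System G, and the message size and iteration count are arbitrary. It bounces an MPI message between rank 0 and rank 1 and reports the one-way time and effective bandwidth:

    /* Minimal MPI ping-pong sketch (illustrative; run with exactly two
     * ranks, e.g. "mpirun -np 2 ./pingpong").  Half the round-trip time
     * approximates one-way latency; message size divided by the one-way
     * time approximates bandwidth. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, iters = 1000;
        int bytes = 1 << 20;               /* 1 MiB message; vary to sweep sizes */
        char *buf = malloc(bytes);

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double one_way = (MPI_Wtime() - t0) / (2.0 * iters);   /* seconds */
        if (rank == 0)
            printf("one-way time %.2f us, bandwidth %.2f GB/s\n",
                   one_way * 1e6, bytes / one_way / 1e9);

        MPI_Finalize();
        free(buf);
        return 0;
    }

Sweeping the message size from a few bytes up to several megabytes would expose both ends of the curve: the small-message latency floor (the roughly one-microsecond figure cited above) and the large-message bandwidth ceiling (the multi-gigabyte-per-second figure).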
But, what makes System G so green? "We set out to design the fastest supercomputer with advanced power management capabilities such as power-aware CPUs, disks, and memory. Our partnership with Apple ensured the most advanced network of power and thermal sensors ever assembled in this type of machine," commented Cameron, an expert on green computing. According to Cameron, System G has thousands of power and thermal sensors. As the world's largest power-aware cluster, System G will allow CHECS researchers to design and develop algorithms and systems software that achieve high-performance with modest power requirements, and to test such systems at unprecedented scale.
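One way such sensor data feeds into power-aware systems research is by integrating sampled node power over a run to obtain energy, then trading that energy against runtime. The sketch below is a minimal illustration under assumed inputs (a fixed one-second sampling interval and made-up wattage samples); the article does not describe System G's actual sensor interface or tooling:

    /* Illustrative sketch only: integrate a trace of power samples
     * (watts at a fixed sampling interval) to estimate energy, and
     * compute an energy-delay product for comparing policies. */
    #include <stdio.h>

    double energy_joules(const double *watts, int n, double dt_seconds)
    {
        double e = 0.0;
        for (int i = 0; i < n; i++)
            e += watts[i] * dt_seconds;    /* rectangle-rule integration */
        return e;
    }

    int main(void)
    {
        /* hypothetical samples: node power in watts, one sample per second */
        double samples[] = { 310.0, 342.5, 355.1, 348.7, 320.4 };
        int n = sizeof samples / sizeof samples[0];
        double runtime = n * 1.0;                      /* seconds */
        double e = energy_joules(samples, n, 1.0);
        printf("energy %.1f J, energy-delay product %.1f J*s\n", e, e * runtime);
        return 0;
    }

Metrics like the energy-delay product computed here let researchers weigh scheduling or frequency-scaling policies that save power against the runtime they cost.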
"We are pleased to have Mellanox 40Gb/s end-to-end InfiniBand adapters and switches be the foundation for Virginia Tech's research initiatives on power-aware and green computing, advanced scientific research systems, and future high productivity solutions," said Sash Sunkara, vice president of marketing at Mellanox technologies. "Our advanced interconnect technology is designed to provide world-leading productivity for high-performance computing and enterprise datacenter clustering solutions, providing faster and more efficient research and engineering simulations."
The mission of CHECS is world-class computer systems research in the service of high-end computing. CHECS faculty (http://www.checs.eng.vt.edu/people.php) work on a broad array of problems and design a wide range of technologies, all with the goal of developing the next generation of powerful and usable high-end computing resources. Their focus is primarily on computer science systems research.
Center members recognize that high-end resources must be powerful in a broad sense (i.e., high-performance, high-capacity, high-throughput, high-reliability, etc.), and at the same time they must be more usable and more energy efficient than current high performance computing (HPC) systems. Toward that end, the center is pursuing a broad research agenda in areas such as processor and memory architectures, operating systems, run-time systems, communication subsystems, fault-tolerance, scheduling and load-balancing, power-aware systems and algorithms, numerical algorithms, and programming models.
The center's goal is to build computing systems and environments that can efficiently and usably span the scales from department-sized machines to national-scale resources. CHECS was established in September 2005 and is supported by Virginia Tech's College of Engineering. It currently has 12 tenured and tenure-track computer science faculty and 65 master's and Ph.D. students.
About Virginia Tech
Founded in 1872 as a land-grant college, Virginia Tech has grown to rank among the largest universities in the Commonwealth of Virginia. Today, Virginia Tech's eight colleges are dedicated to putting knowledge to work through teaching, research, and outreach activities and to fulfilling its vision to be among the top 30 research universities in the nation. At its 2,600-acre main campus located in Blacksburg and other campus centers in Northern Virginia, Southwest Virginia, Hampton Roads, Richmond, and Roanoke, Virginia Tech enrolls more than 30,000 full- and part-time undergraduate and graduate students from all 50 states and more than 100 countries in 180 academic degree programs.
Source: Virginia Tech