January 10, 2012
BLACKSBURG, Va., Jan. 5 – Virginia Tech crashed the supercomputing arena in 2003 with System X, a machine that placed the university among the world’s top computational research facilities. Now comes HokieSpeed, a new supercomputer that is up to 22 times faster yet a quarter the size of X, boasting a single-precision peak of 455 teraflops, or 455 trillion floating-point operations per second, and a double-precision peak of 240 teraflops.
That’s enough computational capability to place HokieSpeed at No. 96 on the most recent Top500 List, the industry-standard ranking of the world’s 500 fastest supercomputers. More intriguing is HokieSpeed’s energy efficiency, which ranks it at No. 11 in the world on the November 2011 Green500 List, a compilation of supercomputers that excel at using less energy to do more. On the Green500 List, HokieSpeed is the highest-ranked commodity supercomputer in the United States.
Located at Virginia Tech’s Corporate Research Center, HokieSpeed – the word “Hokie” originating from an old Virginia Tech sports cheer – contains 209 nodes, or separate computers, connected to one another within and across large metal racks, each roughly 6.5 feet tall, creating a single supercomputer that occupies half a row of racks in a vast university machine room. X took three times the rack space.
Each HokieSpeed node contains two 2.40-gigahertz Intel Xeon E5645 six-core central processing units, commonly called CPUs, and two NVIDIA M2050/C2050 448-core graphics processing units, or GPUs, which reside on a Supermicro 2026GT-TRF motherboard. That gives HokieSpeed more than 2,500 CPU cores and more than 185,000 GPU cores to compute with.
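A quick back-of-the-envelope check, using only the node and core counts quoted above, shows how those totals add up:

```python
# Core counts for HokieSpeed, from the configuration described above:
# 209 nodes, each with two 6-core Xeon E5645 CPUs and two 448-core
# NVIDIA M2050/C2050 GPUs.

NODES = 209
CPUS_PER_NODE, CORES_PER_CPU = 2, 6
GPUS_PER_NODE, CORES_PER_GPU = 2, 448

cpu_cores = NODES * CPUS_PER_NODE * CORES_PER_CPU
gpu_cores = NODES * GPUS_PER_NODE * CORES_PER_GPU

print(f"CPU cores: {cpu_cores:,}")  # prints "CPU cores: 2,508"
print(f"GPU cores: {gpu_cores:,}")  # prints "GPU cores: 187,264"
```

The totals, 2,508 CPU cores and 187,264 GPU cores, match the article’s “more than 2,500” and “more than 185,000” figures.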
“HokieSpeed is a versatile heterogeneous supercomputing instrument, where each compute node consists of energy-efficient central-processing units and high-end graphics-processing units,” said Wu Feng, associate professor with the Virginia Tech College of Engineering’s computer science and electrical and computer engineering departments.
“This instrument will empower faculty, students, and staff across disciplines to tackle problems previously viewed as intractable or that required heroic efforts and significant domain-specific expertise to solve.”
HokieSpeed is still in the final stages of acceptance testing, but Feng already envisions it as Virginia Tech’s next workhorse in research. Just as researchers from around the world used X to crack riddles of the blood system and further DNA research, Feng said HokieSpeed will be a next-generation research tool for engineers, scientists, and others.
HokieSpeed was built for $1.4 million, a small fraction (one-tenth of a percent) of the cost of the Top500’s current No. 1 supercomputer, the K Computer from Japan. The majority of funding for HokieSpeed came from a $2 million National Science Foundation Major Research Instrumentation grant. With federal and state budget crunches here to stay, Feng said HokieSpeed carries another plus: it can attract more international research projects to Virginia Tech, adding to the College of Engineering’s income.
Among the vendors working with Feng on HokieSpeed are Seneca Data Inc. and Super Micro Computer Inc., which were the driving force behind the project, as well as NVIDIA Corp., which provided technical support. Feng has worked with NVIDIA before: the Silicon Valley-based technology firm has named Virginia Tech a research center, and the NVIDIA Foundation’s first worldwide research award, for computing the cure for cancer, went to Feng.
In addition to HokieSpeed’s compute nodes, a visualization wall – eight 46-inch, 3-D Samsung high-definition flat-screen televisions – will provide a 14-foot-wide by 4-foot-tall display that immerses end users in their data. Still under construction, the visualization wall will be hooked up to special visualization nodes built into HokieSpeed, allowing researchers to perform in-situ visualization.
This way, researchers can see in real time whether their computational experiment is turning out as expected, or whether corrections or on-the-fly adjustments must be made, said Feng. Previously, weeks could pass before all the data from a computational experiment was generated and then rendered as a video for viewing and analysis.
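The in-situ idea described above can be sketched in miniature: instead of writing every timestep to disk for later rendering, the simulation hands a reduced snapshot to a monitoring hook as it runs. This is a minimal illustrative sketch, not HokieSpeed's actual software; the names `simulate_step` and `render_frame` are hypothetical.

```python
# Minimal sketch of in-situ visualization: a toy 1-D heat-diffusion
# loop that passes a summary to a rendering/monitoring hook every few
# steps, rather than dumping all raw data for post-processing.

def simulate_step(u, alpha=0.25):
    """One explicit diffusion update on a 1-D temperature field,
    with fixed boundary values."""
    return [u[0]] + [
        u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

def render_frame(step, u):
    """Stand-in for a visualization-node hook: here, just a summary
    a researcher could watch in real time to catch a diverging run."""
    print(f"step {step:4d}  min={min(u):.3f}  max={max(u):.3f}")

u = [0.0] * 50
u[25] = 100.0                  # hot spot in the middle
for step in range(100):
    u = simulate_step(u)
    if step % 25 == 0:         # render in situ every 25 steps
        render_frame(step, u)
```

The key design point is the periodic hook inside the loop: the researcher sees the run evolving while it is still in progress, which is what lets on-the-fly corrections replace the old generate-then-render workflow.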
“What we want to do with HokieSpeed is to enable scientists to routinely do ‘what-if’ scenarios that they would not have been able to do or think of doing in the past,” Feng said. “It will facilitate the discovery process or ‘accelerate the time to discovery.’”
For now, supercomputers are used regularly by high-tech universities, government research labs, and major corporations, organizations ranging from MIT to the Pentagon to Hollywood movie studios. As supercomputers such as HokieSpeed grow in brainpower and diversity, and yet shrink in space, they will become more readily available to the public at large, said Feng. That is his ultimate goal.
“Look at what Apple has done with the smartphone and iPad. They have taken general-purpose computing and commoditized it and made it easy to use for the masses,” said Feng. “The next frontier is to take high-performance computing, in particular supercomputers such as HokieSpeed, and personalize it for the masses.”
Such access to supercomputers could help small businesses that do not have multi-billion-dollar budgets for cyberinfrastructure to better design their products, or the processes by which their products are produced on the factory assembly line. Scientists at smaller universities could use supercomputers for their own research efforts.
“The possibilities are endless as we invent the future at Virginia Tech,” said Feng.
The College of Engineering at Virginia Tech is internationally recognized for its excellence in 14 engineering disciplines and computer science. The college's 6,000 undergraduates benefit from an innovative curriculum that provides a "hands-on, minds-on" approach to engineering education, complementing classroom instruction with two unique design-and-build facilities and a strong Cooperative Education Program. With more than 50 research centers and numerous laboratories, the college offers its 2,000 graduate students opportunities in advanced fields of study such as biomedical engineering, state-of-the-art microelectronics, and nanotechnology. Virginia Tech, the most comprehensive university in Virginia, is dedicated to quality, innovation, and results to the commonwealth, the nation, and the world.
Source: Virginia Tech