October 21, 2009
ATLANTA, Oct. 21 -- The Georgia Institute of Technology today announced its receipt of a five-year, $12 million Track 2 award from the National Science Foundation's (NSF) Office of Cyberinfrastructure to lead a partnership of academic, industry and government experts in the development and deployment of an innovative and experimental high-performance computing (HPC) system.
The award provides for the creation of two heterogeneous HPC systems that will expand the range of research projects that scientists and engineers can tackle, including computational biology, combustion, materials science, and massive visual analytics. The project brings together leading expertise and technology resources from Georgia Tech's College of Computing, Oak Ridge National Laboratory (ORNL), University of Tennessee, National Institute for Computational Sciences, HP and NVIDIA.
NSF's Track 2 program funds the deployment and operation of several leading-edge computing systems operating at or near the petascale. An underlying goal is to advance US computing capability in order to support computational scientists and engineers in the pursuit of scientific discovery. The award announced today is part of the fourth round of awards in the Track 2 program.
"Our goal is to develop and deploy a novel, next-generation system for the computational science community that demonstrates unprecedented performance on computational science and data-intensive applications, while also addressing the new challenges of energy-efficiency," said Jeffrey Vetter, joint professor of computational science and engineering at Georgia Tech and Oak Ridge National Laboratory.
"The user community is very excited about this strategy," Vetter continued. For example, James Phillips, senior research programmer at the University of Illinois who leads development of the widely-used NAMD application, says "Our experiences with graphics processors over the past two years have been very positive and we can't wait to explore the new Fermi architecture; this new NSF resource will provide an ideal platform for our large biomolecular simulations."
Georgia Tech's Vetter will lead the five-year project as principal investigator. The project team comprises luminaries in the HPC field, including a Gordon Bell Prize winner and previous recipients of the NSF Track 2B award. Co-principal investigators on the project are Prof. Jack Dongarra (University of Tennessee and ORNL), Prof. Karsten Schwan (Georgia Tech), Prof. Richard Fujimoto (Georgia Tech), and Prof. Thomas Schulthess (Swiss National Supercomputing Centre and ORNL).
The platforms will be developed and deployed in two phases, with the initial system planned for deployment in early 2010. The system's innovations in performance and power will be achieved through heterogeneous processing based on widely available NVIDIA graphics processing units (GPUs). As industry partners, HP and NVIDIA will provide the computational systems, platforms and processors needed to develop the system.
"Research institutions are looking for energy-efficient, high-performance computing architectures that can speed time to solution," said Ed Turkel, manager of business development in the Scalable Computing and Infrastructure business unit at HP. "The combination of HP's industry-standard HPC server technology with NVIDIA processors delivers increased performance and faster application development, accelerating higher education research projects."
The initial system will pair hundreds of HP high-performance Intel processors with NVIDIA's next-generation CUDA architecture, codenamed Fermi, designed specifically for high-performance computing. This project will be the first of the Track 2 awards to tap the potential of GPUs for HPC.
"Computational science is a key area driving the worldwide application of GPUs for high-performance computing," said Bill Dally, chief scientist at NVIDIA. "GPUs working in concert with CPUs is the architecture of choice for future demanding applications."
A critical component of the program is a focus on education, outreach and training to expand the knowledge and understanding of HPC among a broader audience. The Georgia Tech team will conduct workshops to attract and train new users for the system, engage historically underrepresented groups such as women and minorities, and educate future generations on the vast potential of high-performance computing as a career field.
More information on the project and its resources is available at http://keeneland.gatech.edu.
About the Georgia Institute of Technology
The Georgia Institute of Technology is one of the nation's premier research universities. Ranked seventh among U.S. News & World Report's top public universities, Georgia Tech's more than 19,000 students are enrolled in its Colleges of Architecture, Computing, Engineering, Liberal Arts, Management and Sciences. Tech is among the nation's top producers of women and African-American engineers. The Institute offers research opportunities to both undergraduate and graduate students and is home to more than 100 interdisciplinary units plus the Georgia Tech Research Institute.
Source: The Georgia Institute of Technology