August 30, 2010
AUSTIN, Texas, Aug. 30 -- The National Science Foundation (NSF), The University of Texas at Austin, and multiple partners have committed $9 million to the Texas Advanced Computing Center (TACC) to acquire a new Lonestar system that is expected to support more than 1,000 research projects in science and engineering over three years.
William Powers Jr., president of The University of Texas at Austin, said, "We thank the National Science Foundation for supporting UT's high-performance computing (HPC) system, enabling us and the national research community to conduct transformational science. As we did with the Ranger supercomputer, we want to make Lonestar a showcase system for researchers in Texas and throughout the world."
TACC, in partnership with Dell, Intel, Mellanox Technologies and DataDirect Networks, will deploy an HPC system designed to achieve excellent performance on the workload of applications running on the NSF TeraGrid. The new Lonestar system will replace the current Lonestar, which has served as one of the most productive platforms in the TeraGrid for almost four years, and will offer greater capabilities than the current system.
The computational building blocks of the system will be a total of 1,888 Dell PowerEdge M610 blade servers, each with two six-core Intel Xeon 5600 "Westmere" processors, providing nearly 200 million CPU hours per year. DataDirect Networks will provide the high-speed disk storage, and a Mellanox 40Gb/s InfiniBand network will integrate all of these components to enable tremendous performance on a wide range of applications.
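The capacity figure follows directly from the hardware counts above; a minimal back-of-the-envelope sketch in Python, using only the node, socket and core counts from the announcement plus ordinary calendar arithmetic:

    # Sanity check of the capacity figures quoted above. Node, socket,
    # and core counts come from the announcement; hours per year is
    # ordinary calendar arithmetic.
    blades = 1888            # Dell PowerEdge M610 blade servers
    sockets_per_blade = 2    # two Intel Xeon 5600 "Westmere" processors each
    cores_per_socket = 6     # six cores per processor

    cores = blades * sockets_per_blade * cores_per_socket
    hours_per_year = 365 * 24

    print(f"total cores:        {cores:,}")                   # 22,656
    print(f"CPU hours per year: {cores * hours_per_year:,}")  # 198,466,560

Running all 22,656 cores around the clock for an 8,760-hour year yields roughly 198.5 million core hours, consistent with the "nearly 200 million" figure.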
"We surveyed the TeraGrid user and resources landscape and determined the greatest need is for more high-end HPC system capacity with better delivered performance," said TACC Director Jay Boisseau. "We evaluated the potential for real impact on scientific applications in terms of total sustained performance, scalability and total number of cycles. We believe the new Lonestar will become the system of choice for researchers with codes that are either memory bandwidth or interconnect bandwidth bound--which is true for many simulation-based applications."
Once deployed, Lonestar will be the third-largest system in the TeraGrid and should rank among the most powerful academic supercomputers in the world. TACC also maintains Ranger, the second-largest system in the TeraGrid. Lonestar will be made available to a small number of users in December 2010 and for general use through TeraGrid allocations in early 2011.
While Lonestar will support a national audience of scientists, two-thirds of the funding will advance research at leading institutions and centers whose work is funded by the NSF and other federal agencies. The University of Texas at Austin, Texas A&M University, Texas Tech University and several research groups, including UT's Institute for Computational Engineering and Sciences, are also contributing to this project.
"Texas A&M is excited to be part of this significant collaboration," said R. Bowen Loftin, president of Texas A&M University. "Advancing fundamental research in computational science and engineering is of vital importance to our researchers at Texas A&M. We're pleased that our faculty will benefit from the Texas Advanced Computing Center and the collaboration with UT Austin in HPC systems. We look forward to Lonestar's deployment for the benefit of science and discovery here in the state of Texas."
Guy Bailey, president of Texas Tech University, said, "At Texas Tech, we're aggressively building our research infrastructure and access to resources to provide our faculty and students with new opportunities for research and education. This partnership with UT Austin and TACC will help us ensure that our researchers have access to the best computational technology available."
In addition to serving the open science community, TACC is committed to making a significant impact on the integration of technology and research in industry through its Science & Technology Affiliates for Research program. Lonestar will serve as a platform for new industry partnerships developing parallel applications, with a special focus on energy-sector companies.
About the TeraGrid
The TeraGrid, sponsored by the NSF Office of Cyberinfrastructure, is a partnership of people, resources and services that enables discovery in US science and engineering. Through coordinated policy, grid software and high-performance network connections, TeraGrid integrates a distributed set of high-capability computational, data-management and visualization resources to make research more productive. TeraGrid resources include more than two petaflops of combined computing capability and more than 50 petabytes of online and archival data storage from 11 resource provider sites across the nation.
Source: Texas Advanced Computing Center