June 28, 2010
NCSA, TACC, PSC and NICS to offer first dedicated technology insertion service for the "new" TeraGrid
June 28 -- A team of four US advanced computing centers today announced that it has begun work on the National Science Foundation (NSF) eXtreme Digital (XD) Technology Insertion Service (TIS) award, an $8.9 million, five-year project commissioned by the Office of Cyberinfrastructure (OCI) to evaluate and recommend new technologies for high-performance computing systems and other resources as part of the NSF TeraGrid and its follow-on initiative, XD.
The eXtreme Science and Engineering Discovery Environment (XSEDE) TIS team includes the National Center for Supercomputing Applications (NCSA), Texas Advanced Computing Center (TACC), Pittsburgh Supercomputing Center (PSC), and the National Institute for Computational Sciences (NICS). Since 2005, these centers have contributed to open science and research education by hosting some of the largest HPC resources on the TeraGrid and providing expert technical staff.
Barry Schneider, NSF TeraGrid program director, said, "The XD program represents the next step in the development and deployment of an advanced cyberinfrastructure for the US scientific and engineering community. The XD program will be replacing the NSF TeraGrid program which provides management of the NSF high-performance computing facilities. The XSEDE TIS team will be responsible for testing and evaluating the software which will become part of the fabric of the new XD program."
In April 2011, NSF OCI will officially transition from TeraGrid to XD. The XSEDE TIS award is part of the phased transition process now underway through April 2011, and it plays a critical role in XD's overall mission: to accelerate open scientific discovery and enable researchers to conduct transformational science with next-generation, high-end digital services.
John Towns, TeraGrid Forum chair and principal investigator (PI) on the XSEDE TIS grant, said, "The community will see a coordinated effort to evaluate the most promising technologies and make recommendations to integrate the most fitting technologies into the XD cyberinfrastructure. This award will ensure a coherent approach to leverage the rapidly evolving software environment and hardware capabilities that make the integrated, distributed environment of resources and services collectively more powerful."
In addition to sustaining continuous improvement in XD's architecture and services, the XSEDE TIS team will develop and maintain an open, Web-accessible database of technology projects for XD sites and users. This database will enable new opportunities for collaboration, research and development, and outreach. The XSEDE TIS team will also operate the Technology Evaluation Laboratory to ensure that proposed technology changes are thoroughly tested before being recommended for insertion into the production infrastructure.
"The majority of effort with the Track 2 and similar awards goes toward operating the resources and supporting the users of the resources. Now, NSF is taking a more formal, multi-year approach to technology insertion--to tracking the best technologies in the community and making sure they are evaluated for appropriateness, reliability, effectiveness, and usability," said Jay Boisseau, director of TACC and co-PI on the XSEDE TIS grant.
Ralph Roskies, scientific director at PSC and co-PI on the XSEDE TIS grant, said, "The experienced personnel of NCSA, TACC, PSC and NICS have a long history of collaboration. These centers are complemented by the University of Virginia team, whose staff brings more than 50 years of combined experience in large-scale parallel and distributed systems in support of scientific computing."
"We believe that this award is a step toward a more continuous and successful model for the NSF supercomputing resources. Tracking and inserting new technologies and services will help ensure that the NSF meets its long-term goal of providing the best possible HPC environment for its researchers," said Phil Andrews, project director for NICS and co-PI on the XSEDE TIS grant.
For up-to-date information, including a TeraGrid to XD transition schedule and answers to frequently asked questions, visit http://www.teragrid.org/XDTransition.
Source: Texas Advanced Computing Center