July 25, 2012
The National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE) program made significant progress in its inaugural year, paving the way for further advances in year two – all of it delivered without major disruption to the research community.
“We have transitioned from a start-up mode to providing a regular delivery of value,” John Towns, principal investigator and project director of XSEDE, told attendees at XSEDE12, the program’s first annual conference since the five-year, $121 million project formally succeeded the NSF’s previous program, TeraGrid, 12 months ago.
“XSEDE is becoming much more comprehensive in the number and type of resources and services we provide, and you’ll see an expansion of that over the coming years,” Towns told attendees as the conference opened in Chicago. “But fundamentally, we’re still about increasing the productivity of the community in conducting research, engineering and education.”
Towns, who is also the senior associate director of cyberinfrastructure programs at the National Center for Supercomputing Applications (NCSA), said the focus has changed somewhat from the TeraGrid program, which was primarily about the delivery of HPC resources to advanced research teams to further their science.
“Many good things were done as a result of that focus – this was not a bad thing at all,” Towns said. He added that the XSEDE program now offers numerous opportunities to expand that scope, so the organization can ultimately have a greater impact across a broader range of disciplines while offering more user services. The five-year NSF award for XSEDE includes an option to extend for another five years, pending the results of a major review in the third year of the current award.
XSEDE’s first year included what Towns called a lot of “behind-the-scenes” work that was for the most part transparent to the user community. This included a complete change in the network infrastructure, a stronger emphasis and redefinition of its Advanced User Support operations, and the enhancement of the XSEDE User Portal as a solid interface to the community.
About three quarters of the way into its first project year, Towns said, XSEDE had supported nearly 2,000 research projects and 9,000 users across 32 NSF divisions, yielding 1,800 publications – all without any major disruption to users. “That was and remains one of our top priorities,” he said.
The XSEDE leadership is now focusing on providing solutions that are designed from the outset to evolve with the needs of researchers over a longer period of time. “That is one of our challenges with this project: how do we smoothly evolve the services, the architecture, the support and functions that match the new technologies, the new needs of our existing researchers and the needs of new communities that we’ll start serving,” he said.
Going forward, Towns said there is a wide range of other initiatives in the works or being planned.
Now that XSEDE has a stable core of capabilities, new capabilities and new services will be layered on top of what is already provided as the organization delivers new initiatives faster and faster, according to Towns.
“We’re at a slow jog right now, and over the next year I hope we get up to a good running pace,” he said.