July 25, 2012
The National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE) program made significant progress in its inaugural year, laying the groundwork for further advances in year two, all to be implemented without major disruption to the research community.
“We have transitioned from a start-up mode to providing a regular delivery of value,” John Towns, principal investigator and project director of XSEDE, told attendees at XSEDE12, the program’s first annual conference since the five-year, $121 million project formally succeeded the NSF’s previous program, TeraGrid, 12 months ago.
“XSEDE is becoming much more comprehensive in the number and type of resources and services we provide, and you’ll see an expansion of that over the coming years,” Towns told attendees as the conference opened in Chicago. “But fundamentally, we’re still about increasing the productivity of the community on conducting research, engineering and education.”
Towns, who is also the senior associate director of cyberinfrastructure programs at the National Center for Supercomputing Applications (NCSA), said the focus has changed somewhat from the TeraGrid program, which was primarily about the delivery of HPC resources to advanced research teams to further their science.
“Many good things were done as a result of that focus – this was not a bad thing at all,” Towns said, adding that there now are numerous opportunities under the XSEDE program to provide a much more expanded scope so the organization can ultimately have a greater impact across a broader range of disciplines while offering more user services. The five-year NSF award for XSEDE includes an option to extend for another five years, pending results of a major review in the third year of the current award.
XSEDE’s first year included what Towns called a lot of “behind-the-scenes” work that was for the most part transparent to the user community. This included a complete change in the network infrastructure, a stronger emphasis and redefinition of its Advanced User Support operations, and the enhancement of the XSEDE User Portal as a solid interface to the community.
About three quarters of the way into its first project year, Towns said that XSEDE supported almost 2,000 research projects and 9,000 users across 32 NSF divisions with 1,800 publications – all without any major disruptions to users. “That was and remains one of our top priorities,” he said.
The XSEDE leadership is now focusing on providing solutions that are designed from the outset to evolve with the needs of researchers over a longer period of time. “That is one of our challenges with this project: how do we smoothly evolve the services, the architecture, the support and functions that match the new technologies, the new needs of our existing researchers and the needs of new communities that we’ll start serving,” he said.
Going forward, Towns said, a wide range of other initiatives is in the works or being planned.
Now that XSEDE has a stable core of capabilities, new capabilities and new services will be layered on top of what is already provided as the organization delivers new initiatives faster and faster, according to Towns.
“We’re at a slow jog right now, and over the next year I hope we get up to a good running pace,” he said.