March 30, 2007
In the past few years, the recognition that our standard of living is tied to economic competitiveness has become a popular cultural theme. With the publication of Thomas Friedman's book "The World is Flat" and the National Academy of Sciences study "Rising Above The Gathering Storm," there seems to be a growing sentiment that U.S. economic competitiveness is at risk, and that this competitiveness is related to our ability to maintain leadership in science and technology. The media has picked up on this theme and has helped to bring the issue of technological competitiveness into the public consciousness.
Threats to our standard of living usually get the attention of the electorate. So it's no surprise that this topic has also permeated the political arena and led to the introduction of President Bush's American Competitiveness Initiative (ACI) in his 2006 State of the Union Address. The ACI calls for doubling the funding of three federal agencies (NSF, DOE and NIST), ramping up science education, increasing the technical workforce, and making the R&D tax credit permanent. Even before the President's ACI proposal, Nancy Pelosi had unveiled an Innovation Agenda in 2005 that targeted some of the same issues.
In this climate of heightened interest in technology support, high performance computing is enjoying its moment in the Congressional spotlight. On March 12, the High Performance Computing R&D Act (H.R. 1068) passed the House. The history of this bill spans 16 years. Its precursor, the High Performance Computing and Communications (HPCC) Act, was the one that Al ("Inconvenient Truth") Gore helped introduce back in 1991. That legislation created the Networking and Information Technology Research and Development (NITRD) program of the federal government, which oversees national computing technology research programs. The 1991 legislation was amended several years later, but was never substantially changed. It still contains references to technology and activities that are no longer relevant.
A 2006 version of the HPC R&D bill passed the House last year, but died in the Senate. Senator Maria Cantwell (D-WA) appeared prepared to introduce it as an amendment to a commerce bill. But with Cantwell up for reelection in November, political partisanship took hold: no Senate Republican was willing to co-sponsor the amendment, despite the fact that the HPC bill itself had bipartisan support. Now that Cantwell has been reelected, that obstacle should no longer be an issue.
HPC R&D Act II
The 2007 HPC R&D House bill is now headed for the Senate, with some significant changes. Specifically, it adds a roadmapping process for the provision of federal high performance computing infrastructure. The proposed legislation authorizes the Director of the White House's Office of Science and Technology Policy (OSTP) to develop and oversee the roadmap for federal HPC systems. Currently this is done by an interagency working group (part of NITRD) that is supposed to coordinate the IT activities of the different federal agencies. Apparently the process didn't lead to much coordination. In the revised bill, the Director must lay out the R&D and deployment timetables for all federal HPC assets.
"This is something the community has been asking for for awhile," said Peter Harsha, Director of Public Affairs for the Computing Research Association (CRA). "It was part of the PITAC (President's Information Technology Advisory Committee) recommendations from two years ago. The roadmapping is an attempt to provide some structure to the planning process -- to allow the agencies to think more strategically over the long term."
The proposed legislation also directs the President's IT advisory committee to establish the goals and funding levels of the NITRD program, review the progress of the program, and report the program's status to Congress every two years. The report includes recommendations for modifying agency HPC funding levels as appropriate.
According to Harsha, the last time there was a really thorough review of the NITRD program was back in 1999, during the Clinton Administration. At that time, PITAC released a report that specified fundamental R&D goals. It concluded that the U.S. was significantly underinvested in IT research and that more federal research and development needed to occur. It recommended specific funding levels for the next five years for the different R&D areas. Those recommendations resulted in a large ramp up of federal agency funding, particularly for the NSF. In addition, the roadmap language in the PITAC report was reflected in the new HPC R&D bill.
The hope is that a roadmap will enable the feds to focus on the big issue facing HPC, namely, matching computer hardware and software to achieve the levels of performance (and dare I say, productivity) necessary for big scientific workloads. This will require an investment in software technology, memory architectures, processor architectures and system interconnects.
Dan Reed, who chaired PITAC and was instrumental in influencing the HPC R&D language for the roadmapping provisions, believes that the roadmap process is an important step forward for managing our high performance computing priorities at the national level.
"The real issue is how do we get the agencies to work better together so that we get technology transfer across the various R&D programs and develop a clear strategy for advancing the next-generation architectures, applications, software tools and data systems, in concert," said Reed, "as opposed to just being focused on big iron acquisition."
Something for everyone
Why does HPC get such special treatment from government? As readers of this publication are aware, HPC is a critical enabler of basic research in biomedicine, the physical sciences, national security and weapons development, all of which are of great interest to the U.S. government. The agencies that need this technology, like the DOE, DoD and NASA, use HPC as a means to an end, but are not devoted to advancing HPC for its own sake. According to Reed, one of the reasons this bill is important is that it's a focusing mechanism for HPC research and development across disparate agency missions.
While the HPC R&D Act is part of the competitiveness bandwagon -- and presumably will be funded, in part, by the appropriations that result from the Senate version of the ACI legislation -- the bill extends beyond these broader research initiatives, which are primarily aimed at the academic community. The HPC bill is a cross-cutting initiative that applies to many federal agencies. Some agencies, such as the DOE, NASA and NSF, are interested in physical sciences research. But the bill also applies to agencies whose focus is outside the realm of academic research, for example, the NSA and NNSA. Those agencies are concerned with national security and defense, two areas with even more political visibility than competitiveness.
Because the legislation has broad support from both sides of the aisle, Harsha believes the bill will have a much easier time in the Senate this time around. That sentiment is echoed by Reed, who told me he was confident the Senate would pass a version of the HPC bill that is close to, if not identical to, the House version. No one wants to predict when a Senate version will be introduced, much less passed. But since technology has become such a "Mom and Apple Pie" issue (stem cells notwithstanding), the bill should find plenty of Senators willing to wrap their arms around it. With both Republicans and Democrats on board, they should be able to send the HPC legislation to the President before the end of 2007. It will be the easiest thing he signs all year.
As always, comments about HPCwire are welcomed and encouraged. Write to me, Michael Feldman, at firstname.lastname@example.org.
Posted by Michael Feldman - March 29, 2007 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.