January 11, 2008
Research and Infrastructure Funding
As I write this, there is no joy in Mudville (U.S. science funding), as Casey (the university, national laboratory and technology industry community) has struck out in securing a substantial budget increase for the sciences. After the America COMPETES Act authorized major increases in 2007, with strong bipartisan support, we had high hopes for a corresponding appropriation. Alas, the omnibus appropriations bill includes little new money for science.
With a few notable exceptions, research and infrastructure funding will (at best) just keep pace with inflation. This does not bode well for computational and computer science. If you really want to feel depressed, read Norm Augustine's new essay, Is America Falling Off the Flat Earth? It is a successor to the earlier Rising Above the Gathering Storm report, and it is very sobering.
What can you do? First, don't whine -- that rarely impresses people in Washington. Instead, continue to make the case, through the venues and organizations to which you belong, that science and computing are critical enablers of economic growth, national innovation and education. Finally, it is especially important that we speak with a unified voice. A cacophony of confused messages will further delay the outcome we seek, for there are more supplicants and deserving ideas than available funding. As one Office of Management and Budget (OMB) examiner once remarked to me, "Rarely do I encounter people who say, 'I'm dumb and I have too much money. Can you help me?'" We are definitely not dumb, and we absolutely have great ideas; we must keep doggedly pushing our message.
Coordinating Strategy and Spending
In addition to seeking new funding, we also face challenges in supporting our existing capabilities. As our HPC systems and software infrastructures have grown, so have our operations and maintenance costs. Gone are the days when a large system was 64 processors and an application was developed by a small research team. Today we are deploying systems with hundreds of thousands of processors and many petabytes of storage, executing software frameworks containing tens to hundreds of millions of lines of code. The research agendas of entire disciplines now depend on the long-term sustenance of this infrastructure. Simply put, computational science has become big science, with correspondingly large staffs and rising power, cooling and capital costs.
The National Science Foundation (NSF) and the NSF Office of Cyberinfrastructure (OCI) are struggling to balance community demands for new investments against infrastructure sustenance. For example, I believe over 80 percent of OCI's budget is committed to extant projects, leaving little opportunity for new investment. Because so much of science now depends on computing, we must take a more holistic view of investment, examining scientific and technology priorities across the entire U.S. Federal agency portfolio and coordinating budgets accordingly. This is one of the key recommendations of the recent PCAST report on computing and a topic I discussed recently with Chris Greer, the new head of the National Coordination Office (NCO).
Outsourcing: Perhaps It Is Time?
In late November, I briefed the NSF OCI advisory committee on the PCAST report. The ensuing discussion centered on the rising academic cost of operating research computing infrastructure. The combination of rising power densities in racks and declining costs for blades means computing and storage clusters are multiplying across campuses at a stunning rate. Consequently, every academic CIO and chief research officer (CRO) I know is scrambling to coordinate and consolidate server closets and machine rooms for reasons of efficiency, security and simple economics.
This prompted an extended discussion with the OCI advisory committee about possible solutions, including outsourcing research infrastructure and data management to industrial partners. Lest this seem like a heretical notion, remember that some universities have already outsourced email, the lifeblood of any knowledge-driven organization. To be sure, there are serious privacy and security issues, as well as provisioning, quality of service and pricing considerations. However, I believe the idea deserves exploration.
All of this is part of the still ill-formed and evolving notion of cloud computing, where massive datacenters host storage farms and computing resources, with access via standard web APIs. In a very real sense, this is the second coming of Grids, but backed by more robust software and hardware at enormously larger scale. IBM, Google, Yahoo, Amazon and my new employer -- Microsoft -- are shaping this space, collectively investing more in infrastructure for Web services than we in the computational science community spend on HPC facilities.
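To make the "standard web APIs" phrase concrete, here is a minimal, purely illustrative sketch (in Python, using only the standard library) of what storing and retrieving a dataset from such a hosted service might look like. The endpoint, bucket and object names are hypothetical placeholders of my own, not any particular provider's interface.

    # Illustrative sketch only: talking to a hosted storage service over plain HTTP.
    # The endpoint, bucket and object names are hypothetical placeholders,
    # not any specific vendor's API.
    import urllib.request

    ENDPOINT = "https://storage.example.com"   # assumed service endpoint
    BUCKET = "climate-ensemble"                # assumed storage container
    OBJECT = "run-042/output.nc"               # assumed dataset name

    def put_object(bucket, name, data):
        """Upload an object with an ordinary HTTP PUT; return the status code."""
        url = f"{ENDPOINT}/{bucket}/{name}"
        req = urllib.request.Request(url, data=data, method="PUT")
        with urllib.request.urlopen(req) as resp:
            return resp.status

    def get_object(bucket, name):
        """Fetch a stored object with an ordinary HTTP GET."""
        with urllib.request.urlopen(f"{ENDPOINT}/{bucket}/{name}") as resp:
            return resp.read()

    if __name__ == "__main__":
        # A researcher's code never touches the underlying disks or servers;
        # it simply speaks HTTP to whichever datacenter hosts the data.
        put_object(BUCKET, OBJECT, b"simulation output bytes")
        print(len(get_object(BUCKET, OBJECT)), "bytes retrieved")

The point is not the particular calls, but that the researcher's side of the interaction is nothing more than standard web requests -- which is precisely what allows the hosting itself to be done elsewhere, at scale.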
I view this as the research computing equivalent of the fabless semiconductor firm, which focuses on design innovation and outsources chip fabrication to silicon foundries. This lets each group -- the designers and the foundry operators -- do what they do best and at the appropriate scale. Most of us operate HPC facilities out of necessity, not out of desire. They are, after all, the enablers of discovery, not the goal. (I do love big iron dearly, though, just like many of you.)
In the facility-less research computing model, researchers focus on the higher levels of the software stack -- applications and innovation, not low-level infrastructure. Administrators, in turn, procure services from the providers based on capabilities and pricing. Finally, the providers deliver economies of scale and capabilities driven by a large market base.
This is not a one-size-fits-all solution, and change always brings upsets. Remember, though, that there was a time (not long ago) when deploying commodity clusters for national production use was controversial. They were once viewed as too risky; now they are the norm. Technologies change, and we adapt accordingly. Having said that, I believe there will always be a place for purpose-built HPC facilities for cutting-edge computational science, just as large-scale experimental facilities are purpose-built for other sciences. However, day-to-day science may be better served by leveraging standard facilities and economies of scale. John West made some of these same points on insideHPC.com the other day.
I began on a low note, looking backward at our (currently) dismal state of research funding. Looking forward, I see great opportunities. We are living in a time of great technical ferment, with heterogeneous multicore chips coming sooner than most realize and the stunning growth of Web-delivered services and information. I am not yet sure what the future will bring, but the vision of a national Memex, Vannevar Bush's 1940s dream of an information system capable of extending human capabilities, is within our reach.
Daniel Reed is Microsoft's Scalable and Multicore Computing Strategist and a member of the President's Council of Advisors on Science and Technology (PCAST). The opinions expressed above are his, not necessarily those of Microsoft or the Federal government. Contact him at Daniel.Reed@microsoft.com or his blog at www.hpcdan.org.