September 11, 2008
The big news in the science community this week was the kickoff of CERN's Large Hadron Collider (LHC), the $10 billion atom smasher that sent its first proton beams through the device's 17-mile underground tunnel beneath Switzerland and France. These initial tests were the culmination of 15 years of planning and development that brought together 80 countries and thousands of individual researchers around the world. While it remains to be seen what scientific discoveries will eventually result from the LHC experiments, there is no doubt it represents the biggest and most ambitious global science project today.
Today, though, I'm going to talk about another set of science community partnerships, although these have received much less attention from the press. For the past seven years, the U.S. Department of Energy's (DOE) Office of Science has opened the doors to its terascale supercomputers and changed the way many U.S. scientists are doing cutting-edge research. Through the SciDAC and INCITE programs, the Office of Science has expanded the high-end computing capabilities of the agency, while spreading supercomputing talent and hardware resources across the broader research community.
In most cases, these collaborations were confined to U.S.-based science, but in other cases, the DOE partnered with researchers from around the world. In fact, the DOE (along with the NSF) invested $531 million in the aforementioned LHC project and helped design and build the ATLAS and CMS detectors through two of its labs -- Brookhaven in New York and Fermilab in Illinois.
Since U.S. government agencies compete for budget dollars and attention, the natural reaction is for each agency to guard its resources. So in many ways, the opening up of the DOE's Office of Science was an unlikely path. Perhaps even more unlikely is the individual who led the charge: Dr. Raymond Orbach, the director of the Office of Science. In 2002 Orbach was appointed by the Bush administration -- a group not exactly known for its collaborative style of governance, much less its love for open science, or, in some cases, science at all. But Orbach proved to be a true leader in promoting partnerships with other agencies, universities and even industrial organizations.
The INCITE program, in particular, changed the nature of computing at the DOE. Up until 2002, agency computers were primarily reserved for DOE grantees. At that point, Orbach devised the INCITE program, which opened up DOE supercomputing resources to the broader science community. The program was designed so that supercomputing cycles were allocated on a competitive basis, in which only the most capable organizations and the most interesting problems were given time on the machines. In a nutshell, the idea was to make the best hardware available for the best science. "It sounds completely reasonable now, but I can tell you back in 2002, there was a lot of speculation and complaints that I was opening up our computers to the world," admits Orbach.
In each succeeding year the program expanded its allocations. In 2008, 265 million CPU hours on DOE machines were awarded to 55 projects: eight from industry, 17 from universities, 20 from DOE labs, and the remainder from other public, private and international research organizations.
As it turns out, INCITE will also provide the structure for computer allocations announced on Monday for a new partnership with NOAA. In this case, the Office of Science will make available more than 10 million hours of computing time for NOAA to develop and refine advanced climate change models. The work will be performed on the latest computing hardware at three DOE labs: Argonne, Oak Ridge, and NERSC at Lawrence Berkeley.
Although the DOE has worked with the climate community before, it's mostly been done via lower-level collaborations between PIs across agencies. As Pete Beckman, Argonne's Interim Director, puts it: "This really says we want to move together in a strategic way. And that's very important." He sees the new collaboration as a way to move the national climate and weather modeling work forward under a more unified structure. At Argonne, they've already begun porting some of the NOAA codes to run on their 557-teraflop Blue Gene/P system. Under this new framework, Beckman believes over the next couple of years we should be able to "dramatically improve our capabilities for weather and climate prediction."
The collaboration between the DOE and NOAA has been formalized in a memorandum of understanding (MOU), but is being done under the general framework of the Climate Change Science Program (CCSP), which was instituted in July 2003. The program brought together not just NOAA and the DOE, but also NCAR and ten other federal agencies. The rationale was to bring some cohesion to the climate codes being developed across the United States. "At the time the CCSP was promulgated, the United States was behind in high-end computation," says Orbach. "The Japanese Earth Simulator was the fastest machine in the world and we didn't have any open science capability to match it."
In 2004, the U.S. took back the top spot in supercomputing with BlueGene/L and has maintained the lead ever since. But not all U.S. agencies were endowed with leading-edge supercomputers or the software talent they attracted. NOAA, which is administered by the Department of Commerce, has much less HPC capability than federal agencies like the Department of Defense and the DOE. Currently, the most powerful system owned by NOAA is a relatively modest 25-teraflop IBM Power6-based system. "In my view, this MOU is a recognition of where each of the agencies is at this point in time, and frankly a rationalization of their capabilities and talents," explained Orbach.
In addition to the retaking of supercomputing leadership in 2004, the climate modeling community also expanded. Through the Atmospheric Radiation Measurement (ARM) program and SciDAC (a program begun in 2001 that brought together top researchers in a variety of scientific disciplines), the Office of Science has developed deep expertise in both climate measurement and global climate change modeling. According to Orbach, the accumulation of this expertise over the last five years is at least as important as the new hardware in moving the climate models forward.
Under the new partnership, the NOAA codes will become open to the community, and Orbach is hoping that the software will be optimized with the help of SciDAC researchers. NOAA currently uses one of its home-grown Geophysical Fluid Dynamics Laboratory (GFDL) codes to predict hurricanes, but that code is limited to a grid model with 9 km granularity. Orbach says to get really accurate models, you need to get down to the 1 km level. By optimizing the software, Orbach thinks you can pick up a couple orders of magnitude in "effective speed," and notes that SciDAC has made similar improvements in other codes they've worked on.
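To see why that resolution jump demands both faster hardware and better software, a rough back-of-envelope sketch helps (the scaling exponent here is a common rule of thumb for explicit 3-D grid codes, not a figure from the article): refining the grid spacing by a factor r multiplies the cell count by roughly r cubed, and stability constraints on the time step add roughly another factor of r.

```python
# Illustrative back-of-envelope only (assumed scaling, not from the article):
# refining a 3-D grid by a factor r multiplies the number of cells by r**3,
# and the shorter stable time step adds ~another factor of r, so cost ~ r**4.

def relative_cost(coarse_km: float, fine_km: float, exponent: float = 4.0) -> float:
    """Approximate compute-cost multiplier for refining grid spacing."""
    r = coarse_km / fine_km
    return r ** exponent

# Going from 9 km to 1 km granularity:
print(f"~{relative_cost(9.0, 1.0):,.0f}x more compute")  # roughly 6,561x
```

Under those assumptions the 9 km to 1 km jump costs thousands of times more compute, which is why a couple orders of magnitude in "effective speed" from software optimization matters as much as new machines.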
NOAA's GFDL climate codes and the DOE-NCAR Community Climate System Model (CCSM) codes are the two major national climate models developed in the U.S. The CCSM model is already being run extensively on DOE supercomputers, while porting of the GFDL code is imminent. Whether the codes become integrated at some point or continue to diverge remains a question, although Orbach has his own take on this.
"Ultimately, I would like to see the United States have a single code," he says. "That's what the Europeans have agreed on and they have many more partners than we have. As a consequence, they've been able to develop a common code for the whole European community and have made really wonderful advancements. This multiplicity of codes -- I don't know how it's going to shake out."
Looking further out, Orbach would like to see the climate change models incorporate human factors. Today the climate codes only take into account the physical system -- the oceans, the atmosphere, land masses, etc. But human behavior can be modeled as well, and since people will necessarily change their behavior in response to climate policy decisions -- for example, energy pricing, new energy sources, and conservation measures -- that feedback must be part of the climate model to produce an accurate prediction. More importantly, the policy makers themselves would need access to those models so they could run different scenarios for policies they are considering.
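The feedback loop Orbach describes can be sketched with a deliberately toy coupled model (entirely hypothetical -- the function, parameters and numbers below are illustrative assumptions, not any agency's code): a trivial "physical" emissions-to-warming relation coupled to a behavioral response in which a policy lever, here an energy price, reduces emissions in subsequent years.

```python
# Toy illustration only (hypothetical model and parameters, not a real
# climate code): couple a simple emissions-to-warming relation with a
# behavioral feedback in which higher energy prices shrink emissions yearly.

def simulate(years: int, base_emissions: float, price: float,
             sensitivity: float = 0.01, elasticity: float = 0.5) -> float:
    """Return cumulative warming after `years` of coupled physics + behavior."""
    warming = 0.0
    emissions = base_emissions
    for _ in range(years):
        warming += sensitivity * emissions        # physical system response
        emissions *= (1.0 - elasticity * price)   # human behavioral feedback
    return warming

# A policy maker could compare scenarios by sweeping the price lever:
for price in (0.0, 0.05, 0.10):
    print(f"price={price:.2f} -> warming={simulate(50, 10.0, price):.2f}")
```

The point of the sketch is structural: once human response is in the loop, the model's output depends on the policy input, which is exactly why policy makers would need to run the scenarios themselves.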
Integrated models like this are already being talked about in anticipation of multi-petaflop and eventually exaflop DOE machines. The agency has already donated a million CPU hours to the National Endowment for the Humanities (NEH) to begin to generate interest in this type of application. Also, workshops at Berkeley have been set up to teach social scientists how to make use of these leading edge supercomputers.
"We're not there yet," Orbach told me. "I don't know how fast our computers are going to have to be or how good our codes are going to have to be, but you can see where we're going."
Orbach probably won't be around to usher in these next-generation applications though. As a political appointee, his tenure at the DOE ends in four months, when Bush leaves office. He assumes that whoever prevails in the presidential election will want their own person to head the Office of Science. "I disappear at noon on January 20th," notes Orbach.
Theoretically, his INCITE program could be ditched or scaled back by new leadership, but that's highly unlikely. The program is already too popular in the science and technology community. What's more likely is that the next director will build on the foundation Orbach laid over his seven-year tenure. And with public awareness of climate change and energy policy at an all-time high, the DOE may well be the most important agency of the U.S. government in the next administration.
Posted by Michael Feldman - September 10, 2008 @ 9:00 PM, Pacific Daylight Time
Michael Feldman is the editor of HPCwire.