November 07, 2008
When James Hack came to Oak Ridge National Laboratory (ORNL) at the end of 2007, he was given two hats: one as the director of ORNL's National Center for Computational Sciences (NCCS) and the other as leader of ORNL's laboratory-wide climate science effort.
At the helm of the NCCS, he guides the most powerful open science supercomputing center in the world. The NCCS hosts leading research in climate dynamics and the development of alternative energy sources, as well as a wide range of computational sciences -- from basic explorations in nuclear physics and quantum dynamics to astrophysics explorations of supernovas and dark matter.
As leader of ORNL's Climate Change Initiative, he is in charge of pulling together scientists and engineers from across ORNL to advance the state of the science. Hack is uniquely qualified to take on this role. Before coming to ORNL, he headed the Climate Modeling Section at the National Center for Atmospheric Research (NCAR) in Boulder, Colo., and served as deputy director of the center's Climate and Global Dynamics Division.
We asked Hack about the future of climate science and the climate initiative at ORNL.
HPCwire: How will climate research evolve in the coming years?
Hack: Climate science has largely been curiosity-driven research. But the growing acceptance that humans affect the evolution of atmospheric composition, land use, and so on, all of which in turn affect the climate state, provides a little more focus and a little more urgency to taking a harder look at what the modeling tools are capable of providing in the form of specific consequences for society.
That to me is the transformation. There's a growing need for improvements in simulation fidelity and predictive skill. The potential consumers of that kind of simulation information will be leaning hard on the climate change community to provide answers to their questions. That's the change that's going to differentiate the next 10 years of climate change science from the previous 30.
For example, we know from observations over the last 50 years that the snowpack in the Pacific Northwest has been decreasing. At the same time, temperature in the same region has been increasing. If that trend continues, it raises lots of concerns for water resource managers who have counted on storing their water in the form of snow until a certain time of year when it starts melting.
If precipitation never comes down as snow, or if it starts melting sooner than you need it, you may not be able to meet your water demands. It's an example of infrastructure that's vulnerable to specific changes in a region's climate state. Many of the solutions to this problem may also bring with them other environmental consequences.
HPCwire: So what can you do to help users of climate data?
Hack: We need to know if we can tie down with some certainty how climate will change on the scales that matter to people. It's one thing to tell somebody that the planet's going to warm by 2 degrees centigrade between now and 2100, but it doesn't really help anybody who's in the business of planning or managing societal infrastructures on regional scales. We know from the models that it won't be a homogeneous change. The high latitudes are going to feel maybe 8-degree increases in temperature, and the lower latitudes are going to feel considerably less. And quantifying changes in the hydrological cycle on regional scales may be even more important than temperature changes.
We think we might currently have sufficient skill to project climate change on regional scales about the size of the Southeast, Pacific Northwest, Rocky Mountain West, or Farm Belt. As a community we need to demonstrate that the potential is really there and try and quantify what the uncertainties are. We haven't done a very good job with this challenge so far. But I think the scientific community is starting to realize that we have an opportunity to take a step back and ask, "What can we do on regional scales and timescales that we think are predictable?"
For example, there's a belief that climate statistics have some predictive skill on decadal timescales. The driver for that is going to reside in the ocean, the motion scales of which have a very, very long time frame. There is a belief in the scientific community that the ocean's behavior can be predicted several decades into the future.
If you can do the ocean part of the problem, given the fact that 70 percent of the planet is covered with water, you have a very strong constraint on the other parts of the system. Then the question is, "Will the other component models follow?" The atmosphere doesn't have any deterministic predictive skill beyond a few weeks. So you're dealing with statistics that are forced by components of the climate system that have much slower variability than the atmosphere. Even the terrestrial components of the climate system, particularly land use changes, come into play on longer timescales.
HPCwire: Are we ready to make predictions about the ocean?
Hack: As is the case with the atmosphere, we're still building knowledge about the ocean component. A difficult challenge will be initializing the ocean state for the purpose of prediction. I believe there's a tremendous opportunity for people who want to pursue the ocean initialization problem.
To deliver decadal prediction, we will need to treat the climate problem as an initial-value problem, not a hypothetical boundary-value problem. Besides getting the statistical behavior right, you need the phase of low-frequency variability to be correct as well -- for example, predicting when an El Niño or a La Niña will occur. If we can accurately predict this type of ocean behavior, there is evidence that other features of the climate state can be accurately reproduced. That's a matter of correctly initializing the model and accurately incorporating all the necessary physics in the respective component models.
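Why initialization matters can be illustrated with a toy chaotic system (this is an illustrative sketch using the classic Lorenz equations, not any ORNL or climate model): two trajectories that begin from almost identical initial states diverge completely, which is why deterministic prediction depends on pinning down the initial state.

```python
# Toy illustration (not a climate model): sensitivity of a chaotic
# system to its initial state, using the classic Lorenz equations
# integrated with a simple forward-Euler step.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def trajectory(state, steps):
    """Integrate forward `steps` Euler steps and return the final state."""
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = trajectory((1.0, 1.0, 1.0), 2000)
b = trajectory((1.000001, 1.0, 1.0), 2000)  # perturbed by one part in a million
# After 20 model time units the two states bear little resemblance,
# even though they started almost indistinguishably close.
print(a)
print(b)
```

The same logic scales up: if the ocean's initial state is wrong, the phase of slow variability like El Niño drifts out of sync no matter how good the model physics are.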
HPCwire: How do you demonstrate that you're getting it right?
Hack: We can come up with numerical experiments to assess whether the global model can produce useful information on the timescales and space scales of most importance to resource managers and planners. They may want to know where the temperature's headed locally, how the hydrological cycle is likely to behave, or how extreme events will change. Do the models provide us with the kind of predictive skill we need, and if not, how can they be improved?
When you start windowing down to very small space scales, at what point does the uncertainty or natural noise in the system begin to swamp the signal that you're trying to find? We can illuminate that with retrospective simulations because we have lots of data for an instrumented period that's multidecadal. It's not all the same quality, but it quantifies what's happened in the climate record in a much more complete way, say, than going back to paleoclimate times or even going back a few centuries. Retrospective simulations over the latter part of the 20th century can help to quantitatively establish what the models are capable of doing or not capable of doing on relatively fine spatial scales.
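One way to make the signal-versus-noise question concrete is to compare the forced trend in an ensemble of simulations against the spread among ensemble members. The sketch below uses hypothetical numbers (the trend, variability, and ensemble size are assumptions for illustration, not output from any real model):

```python
# Illustrative sketch with hypothetical numbers, not real model output:
# estimate when a forced regional trend emerges from internal
# variability ("noise") across an ensemble of retrospective runs.
import random

random.seed(42)
TREND = 0.03      # assumed forced warming, degrees C per year
NOISE_SD = 0.5    # assumed interannual variability, degrees C
N_RUNS = 20       # ensemble members differing only in initial state

def ensemble(years):
    """One temperature-anomaly series per member: linear trend plus noise."""
    return [[TREND * t + random.gauss(0.0, NOISE_SD) for t in range(years)]
            for _ in range(N_RUNS)]

def signal_to_noise(series_set, year):
    """Ensemble-mean anomaly divided by the inter-member spread at one year."""
    vals = [s[year] for s in series_set]
    mean = sum(vals) / len(vals)
    sd = (sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5
    return mean / sd

runs = ensemble(60)
for year in (10, 30, 50):
    print(year, round(signal_to_noise(runs, year), 2))
# The ratio grows over time: the forced signal eventually swamps the
# noise, and the crossover year is one way to quantify which scales
# and lead times are usefully predictable.
```

On very small spatial scales the effective noise is larger, so the crossover comes later, if at all; that is the trade-off the retrospective simulations are designed to quantify.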
HPCwire: What is the role of computing in this effort?
Hack: Computing is a big part of the effort. To fully evaluate the skill in our modeling tools, we need very large computer systems -- petascale machines. Assimilating data streams that will be used in the evaluation of modeling frameworks requires very large computer and data systems.
Clearly, a significant computational piece is modeling -- building models that have all the components they need to accurately predict the evolution of the earth's climate system. That's computationally very intensive. Incorporating the complexities of the carbon cycle in these models, using the expertise of ORNL's Environmental Sciences Division, contributes to the computational demands. And then mining the data to deal with questions of human impacts and climate extremes, that again is very computationally intensive.
So computation does in fact tie the whole effort together. It cuts across all the various climate science applications. There are certain areas of science where you need a virtual laboratory to explore the what-if experiments, and that's what computation provides for the climate problem.
Global modeling is something that has been funded under programs like SciDAC [Scientific Discovery through Advanced Computing] and other DOE programs in partnership with other national labs like NCAR. For example, there's an almost 20-year history of ORNL partnering with NCAR on the development of global models and implementing global models efficiently on high-performance computing systems. We are also in the process of building strong new relationships with our NOAA [National Oceanic and Atmospheric Administration] and NASA [National Aeronautics and Space Administration] climate modeling colleagues, looking at high-resolution global modeling, quantifying predictive skill on climate timescales, identifying climate extremes in global simulations, and exploring climate impacts in the context of integrated assessment modeling. All this builds on strong preexisting partnerships with many other DOE laboratories.
HPCwire: You are leading a new multidisciplinary effort at ORNL focused on climate science. What is the reasoning behind this effort?
Hack: ORNL has identified climate change as an opportunity that could very effectively exploit existing competencies, particularly high-performance computing and ORNL's long history of contributing to fundamental knowledge about carbon science and to global modeling. The lab also has expertise in evaluating impacts on societal infrastructure. Take rising sea levels. Most of the world's population lives close to coastlines, so if the sea level rises even a meter, it has a huge societal impact. The people who are displaced must go somewhere else, maybe moving into areas that were previously used for agriculture. That displaces agricultural activities. ORNL has a very strong GIS [geographic information systems] group that can contribute to quantification of these scenarios.
So we're looking at how we can bring these various competencies together to provide a capability that's unique among the laboratories. The end result for us is to provide stakeholders, resource managers, and others with information they need to deal with the consequences of climate change.
HPCwire: What will ORNL's initiative look like?
Hack: It's a cross-cutting initiative. We're trying to engage people from across the laboratory to stretch the kind of work they're doing in such a way that it requires partnerships with other ORNL folks. So far, many of the more promising proposals include collaborations that cut across the Biological and Environmental Sciences Directorate and CCSD [Computing and Computational Sciences Directorate].
As the initiative matures, I hope we'll begin to incorporate more people in the energy arena, another strong part of the ORNL scientific program. These things could include ways to link climate change and the hard questions we're facing in energy production, like bioenergy and renewable energy technologies, as well as energy consumption. Dealing directly with climate mitigation questions, such as strategies for the sequestration of carbon, is an opportunity for this initiative.
From an energy production point of view, planning has a multidecadal timeframe. Anyone planning investments in the energy infrastructure needs to understand what role the environment might play. That's the goal -- to be able to say 20 years from now, "Here's what we anticipate will happen with regard to environmental change on a regional scale."