HIGH-END CLIMATE SCIENCE REPORT RELEASED

January 26, 2001

SCIENCE AND ENGINEERING NEWS

EXECUTIVE SUMMARY

1) Background:

In January 2000, the Environment Division of the White House Office of Science and Technology Policy (OSTP) asked Dr. Richard Rood of the National Aeronautics and Space Administration (NASA) to form an ad hoc Working Group on Climate Modeling Implementation. The impetus for this request was the need to define and implement a strategy for climate modeling in the U.S. that responds to unmet national needs in climate prediction, climate-science research, and climate-change assessment. The composition of the Working Group is given on the title page. There is a representative from each of the Agencies with a significant investment in global weather and climate modeling. The scientists in the Working Group have broad experience in climate and weather science, including atmospheric chemical modeling. In addition, the authors have experience in research and operational activities, high-end computing, and scientific and programmatic management. The Working Group also includes a sociologist who is an expert in human systems and organizational change management. OSTP charged the Working Group to prepare a plan for climate modeling activities to serve as advice to OSTP and the US Global Change Research Program (USGCRP). The Working Group’s report “High-End Climate Science: Development of Modeling and Related Computing Capabilities” is summarized below.

2) Summary of Findings:

A) The requirements and expectations placed on the climate community have grown to the point that the U.S. requires the services of a dedicated organization, which is referred to here as the Climate Service. The Climate Service must operate as a product-driven research organization. This is in contrast to the discovery-driven research that is predominant in U.S. science programs. A successful product-driven Climate Service requires leadership, management, and business practices that are substantially different from those used in discovery-driven research activities. The following attributes are required:

o Clear definition of mission.

o Executive management with the responsibility of overseeing quality control and delivering the climate products.

o Unifying incentive structure that connects individuals’ activities with organizational goals.

o Supporting business practices.

B) There are three fundamental issues that provide complex and conflicting challenges to the formation of a Climate Service.

o The high-performance computing industry has fundamentally changed. While this has provided better computational resources to many individual researchers, those applications that require the highest level of computing are struggling to remain viable. The tension is heightened by U.S. policy on supercomputers.

o There are not enough people to provide either the scientific or information technology expertise needed to sustain all of the U.S. climate-science activities that strive to provide comprehensive capabilities. Key positions are going unfilled and students are not being trained to fill either the scientific research positions or the esoteric niches of software engineering, computational science, and computer science required for a successful high-end climate capability.

o The multi-agency culture that developed to support the discovery-driven research activities is not well suited to support a more product-oriented climate service. A multitude of sub-critical activities reside in the different agencies, and there is no straightforward mechanism to allow concerted concentration of these resources towards common product-oriented goals.

C) The Climate Service must be cognizant of and responsive to foreign centers that are defining the state-of-the-art in assessment and simulation capabilities and, increasingly, in scientific research.

D) Issues related to high performance computing:

o Shared-memory vector computers manufactured in Japan, and essentially unavailable to U.S. researchers, have a combination of usability and performance that gives them far more capability than the computers available to U.S. scientists.

o Parallel computers manufactured in the U.S., often with distributed memory, are difficult to use. In addition, there are intrinsic limitations to the ability of climate-science algorithms to achieve high levels of performance on these computers (a toy illustration of this programming burden appears at the end of this section).

o Japanese-manufactured computers already delivered to foreign centers ensure that U.S. scientists will have significantly less computational capability for at least three to five years. With the delivery of the next generation of Japanese computers, and continuation of current approaches to computing in the U.S., the gap between the U.S. and foreign centers will widen and persist for longer than five years. The purchase of Japanese vector computers would have an immediate impact on climate and weather science in the U.S. and offers the only short-term strategy for closing the computational gap between U.S. and foreign centers.

o There is insufficient investment in software in the U.S. A software infrastructure must be built to support both climate and weather activities. The software infrastructure must (a minimal interface sketch follows this list):

— Facilitate the interactions of scientists at different institutions, allowing concurrent development in a controlled environment.

— Facilitate the interactions of climate scientists and computational scientists, allowing more robust use of computational platforms.

— Include development of the systems software necessary for the operation of the hardware platform.
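
The report specifies requirements rather than designs. Purely as a hedged illustration of what a controlled, multi-institution development environment might standardize, the Python sketch below defines a uniform component lifecycle and a simple coupler. The names (ModelComponent, Coupler) are hypothetical and not drawn from the report, though community frameworks in this spirit, such as the Earth System Modeling Framework, emerged shortly after this period.

    from abc import ABC, abstractmethod

    class ModelComponent(ABC):
        """Lifecycle that every coupled model component agrees to implement,
        so components developed at different institutions are interchangeable."""

        @abstractmethod
        def initialize(self, config: dict) -> None:
            ...

        @abstractmethod
        def run(self, state: dict, dt_seconds: float) -> dict:
            """Advance one coupling interval; return the updated shared fields."""
            ...

        @abstractmethod
        def finalize(self) -> None:
            ...

    class Coupler:
        """Drives components in a fixed order, handing shared fields between them."""

        def __init__(self, components):
            self.components = components

        def step(self, state: dict, dt_seconds: float) -> dict:
            for component in self.components:
                state = component.run(state, dt_seconds)
            return state

The value of such an interface is that a component written at one institution can be exercised, tested, and swapped at another without renegotiating the coupling contract, which is one reading of the “controlled environment” the report calls for.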

o The U.S. policy requiring the use of distributed-memory parallel computers built from commodity processors increases the size of the needed software investment. Japanese vector computers require substantially less expenditure on software. The risk is high that software developed for U.S.-available computers will not achieve the performance and reliability realized with Japanese-manufactured vector computers.

o Without the development of successful software, the deployment of large U.S.-manufactured hardware systems to increase computational capability is not justified.

o The development of U.S. computational platforms for the Climate Service is a research activity and the research must be driven by the climate applications rather than by technological development. As a research activity, the intrinsic risks are high.
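
As a concrete, hedged illustration of the software burden the findings above describe (and not anything that appears in the report itself), the sketch below shows the halo-exchange bookkeeping that a distributed-memory domain decomposition imposes on even a trivial one-dimensional diffusion stencil. It assumes Python with the mpi4py and NumPy libraries; production climate models of this era were written largely in Fortran with MPI, so treat this only as a minimal sketch of the pattern.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    nlocal = 64                        # grid points owned by this rank
    u = np.zeros(nlocal + 2)           # plus one ghost ("halo") cell per side
    u[1:-1] = np.random.rand(nlocal)   # hypothetical initial field

    # Ranks at the domain ends have no neighbor; MPI.PROC_NULL makes the
    # corresponding send/receive a no-op.
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    for step in range(100):
        # Refresh halo cells from neighboring ranks before each update.
        comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
        comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
        # The actual science: a one-line diffusion stencil. On a shared-memory
        # vector machine this line is essentially the whole parallel story;
        # everything above it is distributed-memory overhead.
        u[1:-1] += 0.25 * (u[:-2] - 2.0 * u[1:-1] + u[2:])

Launched with, for example, mpiexec -n 4 python stencil.py, each rank owns a strip of the domain. The decomposition, communication, and load balancing all have to be written, tuned, and maintained by someone; on a shared-memory vector machine the stencil line alone would suffice, which is precisely the software-investment gap the findings describe.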

E) Issues related to the shortage of human resources:

o In order to focus adequate climate-science expertise for the Climate Service, a multi-agency response is needed.

o Timely development of a Climate Service requires participation of presently existing capabilities.

o Integration of efforts across institutions and disciplines is needed to achieve critical concentration of expertise on priority problems.

o Competition for human resources with the mainstream information technology industry is high, and it will be impossible to populate the information technology staffs of multiple comprehensive climate-research centers.

F) Issues related to existing multi-agency culture:

o The current management and review process rewards individual accomplishments and tends to fragment efforts rather than focus them towards common goals.

o The reward and incentive structure that currently exists is not strong enough to allow coordinated, product-oriented goals to rise to a level competitive with internal Agency missions and programs.

o Fundamentally new management and business strategies are needed to support the product-driven Climate Service.

o The difficulties of facing these management issues are large and suggest that the initial implementation of the Climate Service should be as simple as possible. This is in conflict with the need to integrate activities across institutions and disciplines to address human resource issues, to maintain levels of comprehensiveness similar to those of foreign centers, and to keep up with scientific evolution.

o The management issues require more directed authority and decision making than is possible within umbrella organizations, like the USGCRP, which were designed to guide research rather than generate products.

o Without addressing these management issues, providing additional funds to the existing programs will not be effective in the development of the Climate Service.

3) Summary of recommendations:

A) Formation of a Climate Service:

o A Climate Service with a well-defined mission should be chartered to deliver simulation and related data products for understanding climate processes and predicting future states of the climate system.

— Built upon existing expertise.

— Clear separation of Climate Service functions from current Agency obligations.

— Not located in or assigned to any Agency or Center within the current multi-Agency framework.

o We propose that an independent service, which is a concerted federation of the appropriate current agency capabilities, be formed. The existing agencies would act as member states, drawing on a concept used successfully in the European Union.

B) Management and Business Practices:

Without a new business model, incremental funding of existing organizations will not provide the needed capabilities. The Climate Service requires:

o An integrating management structure.

— Executive decision-making process.

— Supporting incentive structure.

o Supporting business practices.

o Appropriate types of external review and oversight process.

o Stability.

o Insulation from short-term programmatic volatility.

C) Computational Resources:

o The Climate Service requires dedicated computational resources with the highest level of capability. Computational resources must be:

— Aligned with the generation of the Climate Service products (i.e., application driven).

— Under the management of the Climate Service.

o U.S. policy on high performance computing adversely affects the Earth sciences. This increases both the expense and risk associated with climate science.

D) Number of centers / integration:

o We recommend two major core simulation activities. The first is focused on weather and should build from the National Weather Service. The second is focused on climate, and its definition requires successfully addressing a number of the strategic and organizational issues discussed throughout this document.

o The decisions on what should be included in a nascent climate service, e.g., seasonal-to-interannual prediction, greenhouse scenarios, chemistry, data assimilation, etc., are among the most difficult to reconcile. There is a need to integrate activities across institutions and disciplines to address human resource issues, to maintain levels of comprehensiveness similar to those of foreign centers, and to keep up with scientific evolution. This is in conflict with the difficult management challenges, which suggest that the initial implementation of the Climate Service should be as simple as possible. The complexities of the integration issues are beyond the scope of the current deliberations.

E) It is critical that initial steps be taken to develop a credible and competitive high-end climate capability, and we are concerned that potential agency and political positioning over the location and running of a potential Climate Service will delay its formation. The Weather Service and the Climate Service should jointly undertake the formation of a unifying infrastructure to allow effective transfer of expertise and algorithms.

F) Size and budget of core simulation capability for Climate Service:

o On the order of 150 scientists, software engineers, and application-directed computational scientists, programmers, and computer scientists need to be dedicated to the modeling capabilities of the Climate Service.

o The total funding for the modeling and computing component of the Climate Service is on the order of $50 M. There are large uncertainties in this number because of computational policy issues that are beyond the scope of the climate-science community. The $50 M is a lower limit.

4) Final Comments:

The details of implementation of the Climate Service will require significant planning and will depend on a number of interrelated decisions that must be made by the Agencies. Strong leadership is required, both within the Agencies and at a level higher than the Agencies. The implementation can and should be incremental. In fact, we believe that with the definition of a stable vision there are a number of existing activities that could form the core of a future Climate Service. There are already moves by all of the Agencies to better integrate and unify modeling and computational activities. If these can be orchestrated towards a long-term vision, then substantial steps can be taken while the details of the Climate Service are developed and evolved. Again, without a new business model and management strategies within which to organize the Climate Service, there is a danger of simply rearranging the current activities, which will not be successful.

Finally, we emphasize that it is artificial to speak of a climate-science capability, a national climate service, without integration of modeling and data (i.e., observational) activities. As charged, we addressed the data activities, but they were not explored in as much depth as the modeling activities. We state, explicitly, that many of the same underlying problems affect the environmental data undertakings of the U.S. as affect the modeling community, and integrated, systematic solutions are ultimately needed. Additional funding is crucial both to develop foundation climate observing systems and to integrate and maintain existing data sets for climate applications.

For the full report, see

http://www.usgcrp.gov/usgcrp/Library/models2001/default.htm
