A Global Climate of Change

By Christopher Lazou

September 21, 2007

“Our civilisation is destroying itself because it is determined to disregard all limits in all areas.”
     -Dominique Bourg, philosopher of sustainable development

Global warming and its dire consequences have at long last permeated the special interest barriers and are at the centre of political debate. A recent EU Research magazine produced a special feature with the title: “Climate Change: We can’t wait any longer,” stating: “The 4th IPCC report was issued and adopted this spring amidst a blaze of publicity and debate. It summarises two decades of important multidisciplinary research and formally concludes that the symptoms of global warming due to human activity are all too real, and will inevitably progress faster than was previously thought. We must act.”

The “business as usual” model will inevitably accelerate global warming, and with it the destruction it brings, faster than previously predicted. The European Union (itself a high polluter) has taken on board the IPCC conclusions and the findings of Nicholas Stern, the economist who wrote a forceful report, commissioned by the UK government, on the cost of global warming. The EU has already agreed to cut greenhouse gas emissions by 20 percent between now and 2020.

At the recent Asia-Pacific Economic Cooperation (APEC) summit, however, the USA, China, India and Australia — the biggest polluters — dragged their feet. This is likely to reduce the pressure on EU countries to meet their targets. In the meantime, the devastation of property and infrastructure and the threat to life continue unabated. Heat waves, forest fires, drought, flooding and hurricanes are becoming everyday news items.

It was in this climate that, from Sept. 9-13, about eighty meteorologists and HPC experts from large-scale computing centres in eleven countries attended the biennial Computing in Atmospheric Sciences (CAS) workshop on the use of HPC in meteorology, held at the idyllic Imperial Palace Hotel, Annecy, France. The workshop was organised by the National Center for Atmospheric Research (NCAR), USA. This excellent small and friendly workshop provided a tour de force in meteorological and computing techniques by active practitioners, some of them IPCC contributors, striving to exploit the latest HPC technology to refine and improve their climate prediction models.

Refinement of models is an urgent requirement for the development of realistic mitigation strategies to address the potential catastrophic consequences of global warming. These talks were augmented by speakers from broader scientific centres of excellence, like CERN, NERSC and ORNL, and from funding bodies such as the NSF, in the USA.

Most presenters came from sites in the USA with large Cray XT3/4, IBM Power5/6 and Blue Gene/L systems, while the European and Australian contingent included a strong representation from sites with large-scale parallel vector NEC SX-8 systems. The main HPC vendors described their upcoming products, their vision for petaflops computing and the technology advances needed for exaflops systems. What became abundantly clear during this workshop is that the “business as usual model” — be it in human activities as a whole or in developing computing technologies — is not a realistic option and radical new approaches are needed. This article highlights a few of the many climate and technology issues raised by presentations given at this workshop.

There were 38 presentations in three and a half days, some describing the enabling potential of grids for international collaboration, both at CERN and within the climate system modelling community. The talks were crammed with technical information on how to use parallel supercomputers for computation with mathematical models that describe climate and weather patterns over time. These were interspersed with weather maps and video from simulations, compared with satellite pictures of actual weather events.

Why are meteorologists doing all this Earth System Modelling, and what is the urgency? As stated above, dramatic flooding and other extreme events related to climate change are happening and are frequently reported in the press and on television. For example, it has just been reported that satellite images show the North West passage connecting the Atlantic and Pacific oceans to be free of ice, making it navigable for the first time since records began. Also, seventeen central African countries are currently flooded, with the homes and crops of millions of people devastated. “It is common knowledge that it is the countries of the south that stand to be hardest hit by global warming, when at present it is the countries of the north that are the biggest polluters,” Nicholas Stern was reported as saying.

Climate simulations show that greenhouse gases attributed to human activities are causing an increase in the Earth’s average temperature. Consequently, fires from intensely hot summers and flooding from heavy rainfall are becoming more common. These images of devastation and their economic aftermath are injecting a political dimension into the proceedings.

A number of talks dealt with prediction and mitigation strategies for devastating events such as flooding and hurricanes. The costs of these events are enormous. Hurricane Mitch, for example, caused the deaths of over 9,000 people in Nicaragua, mostly from flooding and landslips.

Greg Holland from NCAR gave a stimulating keynote talk titled: “Anthropogenic Influences on Intense Hurricanes,” which focused not only on observed hurricanes, but also on the scientific evidence for causal attribution.

He explained that apart from direct impacts, indirect impacts arise from forecast uncertainty, design and preparations, and imperfect knowledge of cyclone parameters. Coastal impacts from tropical cyclones include harbour damage, house and crop destruction, and forest damage from wind, waves, flooding and landslips. Excluding the loss of life, the direct damage from hurricane Katrina was about $80 billion, with an additional $40 billion in indirect costs to the USA. About 95 percent of the oil and gas production in the Gulf was disrupted and about 150 oil rigs were lost. Tornadoes and flooding reached as far away as Quebec, and people were displaced from most coastal states. Government recovery costs were $10-15 billion. The Consumer Price Index (CPI) impact was around 1.4 to 2.3 percent, with the total CPI cost estimated at $16 to $26 billion, or $140 to $230 per household. The reduction in economic growth rate was about 1 percent, but this was compensated by a subsequent overshoot in the economy. In the USA, 50 percent of the population live within 50 miles of the coast, and the cost of evacuating one mile of coast is about $1 million per day. Note that recovery from a storm impact takes years.

Greg went on to convey the IPCC position on hurricanes: “It is likely that increases have occurred in some regions since 1970; it is more likely than not a human contribution to the observed trend; it is likely that there will be future increasing trends in tropical cyclone intensity and heavy precipitation associated with ongoing increases of tropical SSTs [sea surface temperatures].” He demonstrated with detailed graphs that the bulk of the warming since 1970 is due to anthropogenic effects. He reviewed Atlantic SST and atmospheric modes of variation and demonstrated that these are not accounted for by natural variability. His conclusion was unequivocal: “Anthropogenic climate change is substantially influencing the characteristics of North Atlantic tropical cyclones through complex ocean-atmosphere connections and may be influencing other regions.”

The above view was reinforced by Dr. Warren Washington in his talk titled: “Computer Modeling of the 21st Century and Beyond.” Since the 1970s, Warren has sat on committees that advised six U.S. presidents on climate issues. He is also heavily involved at NCAR in the Community Climate System Model (CCSM), which has produced one of the largest data sets for the IPCC fourth assessment. Echoing Greg’s sentiments, he said, “As a result of this and other assessments, most of the climate research science community now believes that humankind is changing the Earth’s system and that global warming is taking place.” He also told me that every question raised by sceptics was thoroughly reviewed and rigorously refuted. “Natural events do not explain global warming. The smoking gun is human emissions, and once included, the warming can be reproduced from year to year,” he stated.

According to Paul Crutzen, another IPCC participant: “The only criticism that could be made of the IPCC report is of it being too cautious.”

For HPC vendors, the good news is that there is a lot of new work to be done, requiring oodles of computing power. The computing requirements span data assimilation, modelling of internal oscillations, prediction of external forcings and hurricane/climate feedback. A system delivering 200 teraflops of sustained performance would be appreciated and utilised today.

In fact, most sites pursuing climate change research are well endowed with computer resources. Three years ago they were lucky to muster 0.5 teraflops of sustained performance. Current procurements in progress have minimum requirements of around 10 teraflops of sustained performance in phase one, followed by at least twice that by 2009. The climate applications, of course, could use an order of magnitude more power now, without waiting for the end of the decade. The tantalising fact is that at least one system capable of delivering 200 teraflops of sustained performance with fewer than 9,000 processors will be available from Japan early next year, but it is unlikely to be sold in the USA.
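It is worth spelling out what that last claim implies. Dividing 200 sustained teraflops across fewer than 9,000 processors gives a per-processor figure far beyond what commodity scalar cores sustained at the time, which is consistent with the large parallel vector systems mentioned elsewhere in this article. A minimal sketch of the division (the numbers come from the claim above; the interpretation is mine):

```python
# Simple division: what 200 sustained teraflops on fewer than 9,000
# processors implies per processor. The inference about vector processors
# is an interpretation, not a statement from the article's sources.

sustained_tflops = 200.0
processors = 9000  # upper bound quoted in the text

per_proc_gflops = sustained_tflops * 1000 / processors
print(round(per_proc_gflops, 1))  # 22.2 -> over 22 sustained gigaflops each
```

At a time when commodity cores sustained a few gigaflops at best on climate codes, 22-plus sustained gigaflops per processor points to very different processor technology.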

In the next few years, the CCSM will be further expanded to include reactive troposphere chemistry, detailed aerosol physics and microphysics, comprehensive biogeochemistry, ecosystem dynamics, and the effects of urbanisation and land use change. These new capabilities will considerably expand the scope of earth system science that can be studied with CCSM and other climate models of similar complexity. Higher resolution is especially important for resolving mountains, river flow and coastlines. Full hydrological coupling, including ice sheets, is important for sea level changes. The model will also include better vegetation and land surface treatments with ecological interactions, as well as carbon and other biogeochemical cycles.

For example, Dave Randall has a five-year NSF grant to study clouds using high-resolution models. Clouds are central to earth sciences, climate change, weather prediction, the water cycle, global chemical cycles and the biosphere. Dave stated: “We are being held back in all of these areas by an inability to simulate the global distribution of clouds and their effects on the Earth system.” The need for high resolution catapults this application into the realm of petaflops computing.

The computer requirements for the next generation of comprehensive climate models can only be satisfied by major advances in computer hardware, software and storage. The classic problems climate models face on supercomputer systems are well known: the computers (with the exception of vector systems) are not balanced between processor speed, memory bandwidth and inter-processor communication bandwidth, including the needs of global computations; they are difficult to program and optimise; it is hard to get I/O out of the machines efficiently; computer facilities need to expand archival data capability into the petabyte range; and there is only a weak relationship between peak performance and performance on actual working climate model programs.

Thus, with sustained teraflops performance now on stream, meteorologists are moving from climate modelling to Earth System Modelling (ESM). This is because the feedback loops between climate, ecology and socio-economics are significant; climate modelling is not possible without proper representation of these systems. Earth System Modelling is multi-scale (in time and space), multi-process and multi-topical (physics, chemistry, biology, geology, economy, etc.), and it is both compute- and data-intensive. Some people claim it requires several orders of magnitude more computing power to tackle the problem. Petaflops and exaflops systems are therefore eagerly awaited.

Stefan Heinzel, director of the Garching computing centre of the Max Planck Society (RZG), Germany, gave a keynote titled: “Toward Petascale Computing in Europe – A Challenge for the Applications and the Hardware Vendors.” He listed the current petaflops projects and their likely hardware architectures. The Riken project in Japan, the Cray Cascade and the IBM PERCS in the USA were all mentioned. Doing the math, he indicated that one petaflop of sustained performance would need hundreds of thousands of cores; even for vector machines, it could be around 50 thousand cores. The question is how one deals with O(50,000)-way parallelism using local memory and MPI. And what about the CPU-memory gap, which keeps widening as CPU speeds grow much faster per year than the roughly 7 percent annual increase in memory speed?
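Heinzel's core-count arithmetic can be sketched in a few lines. The per-core sustained rates below are illustrative assumptions chosen to reproduce his orders of magnitude, not figures from the talk:

```python
# Back-of-envelope core counts for one petaflop of *sustained* performance.
# Per-core sustained rates are illustrative assumptions: a commodity scalar
# core sustaining ~5 gigaflops on climate codes, a vector core ~20 gigaflops.

def cores_needed(target_sustained_flops, per_core_sustained_flops):
    """Cores required, ignoring all parallel overheads."""
    return target_sustained_flops / per_core_sustained_flops

PETAFLOP = 1e15

scalar_cores = cores_needed(PETAFLOP, 5e9)    # 200,000 cores
vector_cores = cores_needed(PETAFLOP, 20e9)   # 50,000 cores

print(f"scalar: {scalar_cores:,.0f} cores, vector: {vector_cores:,.0f} cores")
```

Even this crude estimate, which charitably ignores communication and load-balance overheads, lands in the "hundreds of thousands of cores" regime that makes O(50,000)-way MPI parallelism the optimistic case.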

Transistors on an ASIC are still doubling every 18 months at constant cost, but in the last two years neither AMD nor Intel has announced significantly faster cores. Performance improvements now come from increasing the number of cores on a processor. At present four cores are standard; soon it will be eight. Intel has already announced an 80-core processor technology. IBM doubled performance from the Power5 to the Power6, reaching 4-5 GHz with two cores. The Power6+ could increase the frequency incrementally, but doubling the frequency for the Power7 is going to be difficult. After the year 2015 one envisages about 512 cores or more per processor (the “nanocore” era).

Sequential applications are O(1): there is no longer a substantial performance increase delivered by faster cores, on the desktop or on HPC systems. The snag is that memory speed increases only about seven percent per year, with no improvement in cache latency or memory bandwidth. Current HPC applications can use O(1,000) MPI tasks mapped to threads; “classical” scaling achieves no more than O(3,000), while the Blue Gene/L can achieve O(30,000). Higher scalability, in the range of O(1,000,000), requires new technologies in the processor and in the nodes. An SMP programming model spanning hundreds of cores requires hardware support for lock mechanisms, transactional memory for atomic updates, and a new microarchitecture for latency hiding and pre-fetch hints, i.e., “assist threads.” The memory bandwidth wall is the limiting factor for the scalability of multicore architectures.
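The memory bandwidth wall can be made concrete with a simple roofline-style estimate: attainable performance is capped by either peak compute or by memory bandwidth multiplied by the code's arithmetic intensity. All hardware numbers below are illustrative assumptions, not measurements from the talk:

```python
# Roofline-style sketch of the memory bandwidth wall. Attainable flops are
# bounded by min(peak compute, bandwidth * arithmetic intensity).
# All numbers are illustrative assumptions.

def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

# Hypothetical 8-core node: 64 peak gigaflops, 10 GB/s memory bandwidth.
# Stencil-like climate kernels often run near 0.5 flops per byte.
node = attainable_gflops(peak_gflops=64.0, bandwidth_gbs=10.0,
                         flops_per_byte=0.5)
print(node)   # 5.0 -> bandwidth-bound, under 8 percent of peak

# Doubling the core count doubles peak but leaves the cap untouched:
node2 = attainable_gflops(peak_gflops=128.0, bandwidth_gbs=10.0,
                          flops_per_byte=0.5)
print(node2)  # still 5.0
```

This is why adding cores without adding memory bandwidth buys low-intensity climate kernels essentially nothing, and why vector systems with their high memory bandwidth fare comparatively well on these codes.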

The file and I/O system needs to support hundreds of thousands of clients, which requires low-latency communication to solve the scalability problem. One expects a huge number of files in a single file system — trillions of files with terabyte-per-second transfer rates. Different techniques are also needed for robustness, since RAID6 is not adequate at this scale.

For petaflops systems, the operating system (OS) and middleware need to be aware of massive parallelism. OS failures have to be reduced, and OS jitter can dramatically degrade application performance. Lightweight operating systems or specially reduced standard kernels should be considered, along with interrupt synchronisation and dynamic management of various page sizes. Applications should adapt their page sizes dynamically, which should reduce boot time and the time to load an application. Hierarchical concepts should be implemented to solve scalability issues; hierarchical daemon structures are required to support hundreds of thousands of clients in queuing and monitoring systems.

And, of course, one hits the processor power wall. Smaller cores help, providing more operations per watt. A lower frequency also helps, allowing more cores and thus more operations per watt. This implies higher scalability, but the ratio of sustained to peak performance remains an open question. Memory and huge caches use significant power too, as does the interconnect. The challenge is to optimise the power consumption of each component.
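The trade-off behind "lower frequency means more operations per watt" can be illustrated with a toy power model. Dynamic power scales roughly with voltage squared times frequency, and voltage tracks frequency, so power per core falls roughly with the cube of frequency while throughput falls only linearly. The cubic scaling and every number below are assumptions for illustration:

```python
# Toy model of the power wall: dynamic power ~ V^2 * f, and V tracks f,
# so per-core power scales roughly as f^3. Baseline core (assumed): 10 W
# at 2 GHz, delivering 2 "Gops/s" per GHz of clock.

def core_power(freq_ghz, base_freq=2.0, base_power_w=10.0):
    """Cubic power model, normalised to the assumed baseline core."""
    return base_power_w * (freq_ghz / base_freq) ** 3

budget_w = 80.0  # fixed power budget for one socket/node (assumed)

# Fast cores at 2 GHz: 10 W each, so 8 fit in the budget.
fast_n = budget_w / core_power(2.0)   # 8 cores
fast_ops = fast_n * 2.0               # 16 aggregate ops/s units

# Slow cores at 1 GHz: 1.25 W each, so 64 fit in the same budget.
slow_n = budget_w / core_power(1.0)   # 64 cores
slow_ops = slow_n * 1.0               # 64 aggregate ops/s units

print(fast_ops, slow_ops)
```

Under these assumptions the same power budget yields four times the aggregate throughput from the slower cores, which is precisely the bargain multicore and nanocore designs offer, provided the application can scale to eight times as many cores.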

In summary, the move to multicore, and later nanocore, architectures poses a challenge for petascale applications. The memory wall has to be hidden: cache architectures and memory have to improve in latency and bandwidth. New synchronisation mechanisms have to be realised as SMP parallelism becomes more important. Helper threads will support pre-fetch techniques, but applications have to improve their latency tolerance. Interconnect limits imply that new programming techniques are needed. Increased power consumption is the most critical problem, due to budget limitations. Reinventing parallel computing to cope with the massive increase in cores in future architectures is an enormous challenge.

The Cray XT series and IBM Power and Blue Gene systems, as well as other vendors in the USA and vendors such as NEC and Fujitsu from Japan, are developing petaflops systems, but how effectively these systems can be utilised is an open question and fraught with a myriad of challenges.

During the last CAS workshop in 2005, a strong emphasis was placed on data management and the challenges it entails. This time, the emphasis was more on power used by supercomputers (carbon footprint) and the facility footprint (space) requirements.

In my view, global warming is the most pressing challenge of the 21st century, and we all need to reduce our carbon footprint and become carbon neutral. Our political leaders should be judged on whether they are “fit for purpose” and then held accountable for the mitigation policies they enact. When Mr Alan Greenspan, the former Federal Reserve (U.S. central bank) chairman, says in his newly published autobiography that “the Iraq war is largely about oil,” he should have added that the war is also an irresponsible way of avoiding the economic decisions needed to start mitigating “an inconvenient truth”: global warming.

The workshop presentations are available on the NCAR Web site: http://www.cisl.ucar.edu/dir/CAS2K7/.


Copyright (c) Christopher Lazou, HiPerCom Consultants, Ltd., UK. September 2007. Brands and names are the property of their respective owners.

For additional information, see the NCAR announcement on the eighth biennial session of Computing in Atmospheric Sciences (CAS2K7) at http://www.hpcwire.com/hpc/1791812.html.
