Researchers across scientific disciplines are clamoring for exascale systems that can handle bigger, more complex models. In climate modeling and weather forecasting, researchers are finding promise in new HPC architectures, such as the one used in the Green Flash cluster, as a path toward the exascale goal.
Green Flash is a specialized supercomputer designed to showcase a new way to perform more detailed climate modeling. The system pairs customized Tensilica-based processors, similar to those found in iPhones, with communication-minimizing algorithms that cut down on data movement. Together, these let it model the movement of clouds around the earth at a higher resolution than was previously possible, without consuming huge amounts of electricity.
The computational and power-consumption problems that had to be overcome to reach higher-resolution climate models are clearly explained in this Berkeley Science Review article. In short, scientists are eager to improve on current cloud climate models, which have a resolution of 200 km. A model built on a grid with data points 1 km to 2 km apart would be far more useful, yielding more accurate weather forecasts and a deeper understanding of the science behind climate modeling.
However, the computational demands of high-resolution climate modeling don't increase linearly; they increase geometrically. Not only does the finer grid contain far more points, but more "time steps" are required to keep the numerical equations stable. Dr. Michael Wehner, a researcher at LBL, ran the numbers and found that the 2 km model requires roughly 1 million times as many FLOPs as the 200 km model.
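A rough back-of-the-envelope sketch shows where a factor of a million can come from. The assumptions here are ours, not from the article: compute cost scales with the number of horizontal grid points (refinement squared) and with the number of time steps, which must shrink in proportion to the grid spacing (a CFL-style stability constraint), giving refinement cubed overall.

```python
# Back-of-the-envelope scaling for cloud-resolving climate models.
# Assumptions (illustrative, not from the article): cost grows with the
# number of horizontal grid points (refinement^2) and with the number of
# time steps, which must shrink in proportion to grid spacing
# (CFL-style stability), giving refinement^3 in total.

coarse_km = 200   # resolution of current cloud models
fine_km = 2       # target resolution

refinement = coarse_km / fine_km      # 100x finer grid spacing
grid_factor = refinement ** 2         # 10,000x more horizontal points
timestep_factor = refinement          # 100x more time steps
total_factor = grid_factor * timestep_factor

print(f"{total_factor:,.0f}x more FLOPs")  # 1,000,000x
```

Under these simplified assumptions, the 100-fold refinement from 200 km to 2 km multiplies the work by one million, matching the order of magnitude Wehner reports.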
Translated into real-world figures, such a high-resolution system would require 27 petaflops of sustained performance and 200 petaflops of peak performance, according to the BSR story. This theoretical system, bigger than anything ever actually built, would require 50 to 200 megawatts of power to run, comparable to the electricity demands of an entire city. Its power bill would run to hundreds of millions of dollars a year. Clearly, a different approach was needed.
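To get a feel for a power bill of that magnitude, here is a quick sketch of the annual electricity cost at the article's 50 to 200 megawatt range. The $0.10 per kWh rate is an illustrative assumption on our part, not a figure from the article.

```python
# Rough annual electricity cost for a machine drawing 50-200 MW,
# assuming continuous operation. The $0.10/kWh rate is an
# illustrative assumption, not from the article.
hours_per_year = 24 * 365    # 8,760 hours
rate_per_kwh = 0.10          # dollars, assumed

for megawatts in (50, 200):
    kwh = megawatts * 1000 * hours_per_year
    cost = kwh * rate_per_kwh
    print(f"{megawatts} MW -> ${cost / 1e6:,.0f}M per year")
```

Even at this assumed rate, the bill lands in the tens to hundreds of millions of dollars per year, which is the order of magnitude the article cites.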
Instead of building a general-purpose supercomputer, Wehner and others at LBL, UC Berkeley's Electrical Engineering and Computer Sciences Department, and the RAMP (Research Accelerator for Multiprocessors) project decided to try a customized system, where hardware and software are designed together.
The design came together as Green Flash, which combines energy-efficient Tensilica processors with communication-minimizing algorithms. Currently, Green Flash, which has been called "the iPod supercomputer," is running 4 km models. The combination is predicted to make the 2 km cloud model feasible on a system drawing only 4 megawatts of power, 12 to 40 times less than a conventional supercomputer would need to run the same model.
This approach does have its downsides, however. Because Green Flash was designed specifically for climate modeling workloads, it won’t work with other types of HPC applications, such as analyzing genes or financial transactions. (In fact, it doesn’t even work with all the different climate modeling systems that are in use.) It’s not nearly as flexible as other supercomputers in the LBL stable, such as Hopper, BSR notes in its story.
However, given the energy wall imposed by the generic approach, custom-building the next generation of supercomputers to solve specific HPC problems may be part of the solution to the exascale equation.