The Greening of HPC

By Michael Feldman

June 26, 2008

One of the final panel sessions at the International Supercomputing Conference (ISC) last week focused on “green” supercomputing, a term used to encompass both power efficiency and environmental responsibility. Barely an issue just a couple of years ago, green computing is now being sold in one form or another by every IT vendor, HPC or otherwise. With overall IT power consumption expected to grow around 15 percent per year, and with datacenters under pressure to accommodate ever-larger systems, energy-conserving strategies have become a huge issue in HPC and the IT industry in general.

Panel chair Horst Simon (LBNL) started the session by noting that even small energy cutbacks can yield large savings over a long period of time. He pointed out that the very modest energy-saving measures instituted in the U.S. in the mid-’70s in response to the oil embargo netted $700 billion in savings over the ensuing 30 years. And although IT infrastructure consumes only about 0.8 percent of the energy used worldwide, that electricity cost $7.2 billion in 2005. Given the double-digit growth rate of IT power consumption, steps taken today could save billions of dollars over the next decade.

Simon cited a Google report which determined that datacenter energy costs are starting to dominate lifecycle costs. According to this study, energy costs may eclipse acquisition costs for low-end servers after just two years of service. With that in mind, Google and Microsoft are building huge datacenters (tens of megawatts) along the Columbia River to take advantage of cheap hydroelectricity and to use the river water for cooling. Ten or fifteen lesser-known IT companies are building similar facilities elsewhere in anticipation of future demand for ultra-scale datacenters. “Clearly the industry is changing and something is going on with power and computing,” Simon noted.

Green Destiny

Virginia Tech’s Wu-chun Feng was pursuing green computing before it became fashionable. In 2002, Feng, then at Los Alamos National Laboratory, and his colleagues set out to develop an energy-efficient HPC system that required minimal cooling. The effort was born of necessity. The datacenter available to the team wasn’t much more than a warehouse. It had little access to cooling, with temperatures in mid-summer rising to 85-90°F. Both power and space were limited. The goal was to develop a highly reliable machine that could operate under these harsh conditions; performance was secondary.

In response, Feng’s team developed a 240-node cluster, called Green Destiny, based on the highly energy-efficient Transmeta processor (1 GHz TM5800). The entire system used just 3.2 kilowatts. The Transmeta chips weren’t the fastest ever conceived: Green Destiny topped out at 101 gigaflops on Linpack, which even in 2002 would have placed it in the bottom half of the TOP500. Feng recalled they took some heat about the machine’s low performance, prompting one colleague to joke that it “runs just as fast when it’s unplugged.” But the project was a success. In the two-year life of the system, there was no unscheduled downtime.
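For scale, those figures are enough to work out the machine’s efficiency. A minimal back-of-envelope sketch in Python, using only the numbers quoted above:

    # Green Destiny efficiency, from the article's figures:
    # 240 nodes, 3.2 kW for the whole system, 101 gigaflops on Linpack.
    nodes = 240
    total_power_w = 3200.0    # 3.2 kilowatts
    linpack_flops = 101e9     # 101 gigaflops

    print(f"{total_power_w / nodes:.1f} W per node")              # ~13.3 W
    print(f"{linpack_flops / total_power_w / 1e6:.1f} Mflops/W")  # ~31.6 Mflops/W

At roughly 13 watts per node and about 32 megaflops per watt, the cluster sipped power by the standards of its day, which is what made reliable operation in a hot, uncooled room plausible.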

Other than interest in the exotic Transmeta hardware, Feng’s work got little attention. In 2002, HPC was about performance at any cost. Oil was $25 a barrel and not many people were worried about power and cooling costs yet. The conventional wisdom was that Moore’s Law would solve everything. “It’s interesting to see in five and a half years how things have changed,” said Feng.

Computing per Watt Has Been Solved

HPC veteran John Gustafson broke with conventional wisdom, declaring that the computing part of our machines is already highly energy efficient. He noted that the latest ClearSpeed gear delivers 4 gigaflops/watt, and that Intel will soon achieve that in mainstream processors. According to Gustafson, the computational elements of a modern HPC system consume just a small fraction of the total power.

He illustrated this by pointing out that a typical Linpack run for a top 10 system uses the energy equivalent of 20 barrels of oil. The floating point calculation itself uses just 0.1 barrel. The rest goes to moving data from one place to another (although he admitted that includes on-chip data movement as well). With that in mind, Gustafson said the industry should now focus on the energy efficiency of data communication. He wants to replace flops with a new metric: “byps,” or bytes per second. According to him, measuring byps per watt will give people a much better understanding of the energy efficiency of systems.
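Restated numerically, Gustafson’s observation and his proposed metric look like the following sketch (Python; the 1 TB/s and 500 kW figures in the example are purely illustrative, not numbers from the panel):

    # Of the ~20 barrels of oil equivalent consumed by a top-10 Linpack run,
    # only ~0.1 barrel goes to the floating-point arithmetic itself.
    print(f"compute share of run energy: {0.1 / 20.0:.1%}")   # 0.5%

    # Gustafson's proposed figure of merit: bytes per second per watt.
    def byps_per_watt(bytes_moved, seconds, watts):
        """Sustained data-movement rate per watt of system power."""
        return bytes_moved / seconds / watts

    # A hypothetical system sustaining 1 TB/s while drawing 500 kW:
    print(f"{byps_per_watt(1e12, 1.0, 5e5):,.0f} bytes/s per watt")  # 2,000,000

On his accounting, more than 99 percent of the run’s energy goes to data movement, which is why he wants the denominator of the efficiency metric to be bytes moved rather than flops computed.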

Wasteful power consumption in data communications is relatively easy to find. According to an IEEE Spectrum report, in 2005 all the NICs in the U.S. were estimated to have consumed 5.3 terawatt-hours of energy. Since all of IT consumes 200 terawatt-hours, the NIC devices alone represent about 2.6 percent of the power used by all the machines. Furthermore, since communication tends to be bursty, about 95 percent of this energy is wasted: most of the time, a NIC is chewing up watt-hours waiting for the next data deluge.

Gustafson maintains that computing is not going green to reduce energy use or shrink its carbon footprint, but to get more performance within a fixed power budget. “This is the inherent nature of HPC,” said Gustafson. Gains in performance per watt will go toward increasing performance, not reducing watts. After all, he said, “HPC users are not tree huggers.”

Green Computing by Law

Because of Japan’s limited domestic energy resources and an environmentally conscious populace, green computing is more or less mandated by law in the island nation. Under the Kyoto Protocol, the amount of carbon many government facilities and public universities can emit is regulated. With such stringent limits, datacenters have no choice but to aggressively pursue energy efficiency.

As the technical lead for the TSUBAME supercomputer at the Tokyo Institute of Technology (TiTech), Satoshi Matsuoka has had to deal with this reality for some time. The TSUBAME machine was built with ClearSpeed accelerators on top of conventional Opteron nodes to achieve high levels of performance with low power consumption. Currently at 100 teraflops, the system consumes a total of 1.2 megawatts for power and cooling.

Matsuoka explained that as part of TSUBAME’s upgrade path over the next two years, his team is tasked with delivering a one petaflop system — a 10-fold increase in performance over the current machine. And they have to achieve that with the same power consumption as today’s TSUBAME. That means they will have to exceed the energy efficiency of the IBM Roadrunner, the most energy-efficient supercomputer ever built. One of the technologies TiTech is looking at is GPGPU. The raw double precision performance per watt is not as good as that of the ClearSpeed boards, but GPUs are very well suited to bandwidth-intensive applications, like FFT codes. And even with today’s technology, the energy efficiency of GPUs is about five times better than that of Blue Gene.
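The arithmetic behind that upgrade target is straightforward; a minimal sketch (Python) using only the figures quoted above:

    # TSUBAME today: 100 teraflops at 1.2 MW (power and cooling).
    # Upgrade target: one petaflop at the same 1.2 MW.
    power_w = 1.2e6

    for label, flops in (("today", 100e12), ("target", 1e15)):
        print(f"{label}: {flops / power_w / 1e6:.0f} Mflops/W")

    # today:  83 Mflops/W
    # target: 833 Mflops/W -- a 10x jump in energy efficiency

Holding power constant while multiplying performance tenfold means the machine’s overall efficiency must also improve tenfold, which is the substance of Matsuoka’s point that the upgrade would have to surpass even Roadrunner.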

On the national scale, Matsuoka said the Japanese government is starting a five-year project in ultra-low-power HPC. Researchers will look at multicore processors, accelerators, next-generation memory technology, advanced networks, better cooling technology, facility improvement, zero emission power sources, and low-power algorithms. The project’s goal is to develop basic technologies that will enable a 1,000-fold increase in energy efficiency over the next decade.

Integrated Facilities Design

Dr. Franz-Josef Pfreundt, who heads IT at Fraunhofer-ITWM, thinks the real discussion of green computing needs to focus on energy costs. He noted that an environmentally friendly solution could be built around suitable biofuels or solar energy technology, but the costs may make such a model impractical.

Pfreundt asserted that energy costs currently represent only a few percent per year of the initial acquisition cost of a supercomputer. That works out to only about 10 to 15 percent of the machine’s cost over its three-year lifetime. At ITWM they’ve achieved that ratio for their latest 2.1 million euro supercomputer, even at a rate of 0.10 euros per kilowatt-hour. He also argued that extending the useful life of the hardware is another cost-saving strategy.
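The article gives the machine’s price and the electricity rate but not its power draw, so as a rough plausibility check, here is a sketch that assumes a hypothetical 100 kW draw (an assumption, not a figure from Pfreundt):

    machine_cost_eur = 2.1e6     # latest ITWM supercomputer
    rate_eur_per_kwh = 0.10      # quoted electricity rate
    assumed_power_kw = 100.0     # hypothetical draw; not stated in the article

    annual_eur = assumed_power_kw * 24 * 365 * rate_eur_per_kwh
    print(f"per year:     {annual_eur / machine_cost_eur:.1%}")      # ~4.2%
    print(f"over 3 years: {3 * annual_eur / machine_cost_eur:.1%}")  # ~12.5%

Under that assumption the numbers land squarely in the few-percent-per-year, 10-to-15-percent-lifetime range Pfreundt described.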

Pfreundt believes that to optimize power use, people need to consider the efficiency of the entire computing infrastructure. Part of the problem, he said, is that the energy budget for the system is divorced from the acquisition cost. If they were wrapped together as part of the system procurement, buyers would naturally pay more attention to power consumption. At ITWM, they’ve managed to achieve a relatively cost-efficient setup by re-using some of the waste heat and selecting energy efficient hardware.

For example, by taking advantage of the temperate German climate, they use outside air for cooling — something that would not be possible in summer over much of the U.S. and Asia. They also recycle the warmed 86°F exhaust air to heat local greenhouses. Pfreundt thinks that if they could extract more heat from the computers directly, that is, water-cool them, the waste heat would have even more value, since the heated water could be sold for community heating.

ITWM recently purchased a 70-blade IBM QS22 BladeCenter cluster based on the new Cell processors (PowerXCell 8i), the same blades that went into the Roadrunner petaflop machine. At ITWM, they’ve demonstrated 488 megaflops per watt on Linpack and think they can achieve 600 megaflops per watt, which would earn the system the top spot on the Green500 list. While the current Cell processors provide 1.6 gigaflops/watt, Pfreundt projected that within three years the industry will have chips that deliver 10 gigaflops/watt. With that level of efficiency, Pfreundt said, he will be able to get a petaflop into his facility.
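Pfreundt’s projection can be worked out directly. A short sketch (Python) of the compute power a one-petaflop machine would draw at today’s and the projected efficiencies, ignoring cooling and other facility overhead:

    target_flops = 1e15   # one petaflop

    # Today's Cell (1.6 Gflops/W) vs. the projected 10 Gflops/W chips:
    for gflops_per_watt in (1.6, 10.0):
        watts = target_flops / (gflops_per_watt * 1e9)
        print(f"{gflops_per_watt:>4.1f} Gflops/W -> {watts / 1e3:,.0f} kW")

    #  1.6 Gflops/W -> 625 kW
    # 10.0 Gflops/W -> 100 kW

At 100 kW of compute power, plus cooling overhead, a petaflop really would fit in a modest facility like ITWM’s.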

Learning From the Embedded Space

Berkeley Lab’s John Shalf observed that because of the industry’s typical two-to-four-year design cycle, not much has happened yet that fundamentally addresses the power issues in computing, although he thinks we’re starting to see the beginnings of some promising approaches. Accelerators like the Cell processor and GPUs have huge potential, especially with properly tuned codes, but the overhead of shuffling data back and forth between the host CPU and the accelerator can limit performance on many apps. Shalf sees discrete accelerators as a stepping stone on the path to integrated manycore designs.

According to him, the goal of green computing is to minimize the power consumed for the amount of work performed. This has been the driving force behind embedded computing for some time and is the reason Shalf believes that the current power crisis is converging the embedded and high performance computing spaces. In the embedded world, you start with the application and design the system around it. According to Shalf, that kind of tight coupling between hardware and software is what enables exceptional power efficiency.

“That doesn’t mean it’s special-purpose and only works for one application target,” he explained. “It means that you throw away everything that you don’t need for a range of problems.”

That translates into much less complex microprocessors than today’s standard x86 or Power chips, or even the new Intel Atom. By simplifying the logic, you can design much smaller chips with many more cores, shorter instruction pipelines, and less power leakage. For example, GPUs don’t have TLBs, since they’re not swapping applications in and out of memory. “Most of what you have on these modern CPUs, you don’t need for science,” said Shalf.

At Berkeley, Shalf and others are currently working on “Green Flash,” a research project to define a new class of supercomputers for modeling climate conditions and understanding climate change. They chose the application because it encompasses a wide range of algorithms that are applicable to many different science codes. The work is being done in collaboration with Tensilica, a company that tailors highly energy-efficient embedded processors for platforms like MP3 players and network routers. One implementation, the Xtensa microprocessor, draws just 0.09 watts at 600 MHz and achieves 100 times better floating point performance per watt than the Intel Core2 architecture.

Using Tensilica’s design tools, a new chip can be developed in 18 months at a cost of $5 to $10 million. When you consider that a leadership class supercomputer is typically priced in the $100 million range and is the end result of a multi-year development cycle, a simple microprocessor design could easily fit into the scope of the project. Shalf believes this may be the commodity model that HPC will need to adopt if it hopes to achieve exascale computing.
