The Greening of HPC

By Michael Feldman

June 26, 2008

One of the final panel sessions at the International Supercomputing Conference (ISC) last week focused on “green” supercomputing, a term used to encompass both power efficiency and environmental responsibility. Barely an issue just a couple of years ago, green computing is now being sold in one form or another by every IT vendor, HPC or otherwise. With overall IT power consumption expected to grow around 15 percent per year and the pressure on datacenters to accommodate ever-larger systems, energy-conserving strategies have become a huge issue in HPC and the IT industry in general.

Panel chair Horst Simon (LBNL) started the session by noting that even small energy cutbacks can yield large savings over a long period of time. He pointed out that the very modest energy-saving measures instituted in the U.S. in the mid-’70s in response to the oil embargo netted $700 billion in savings over the ensuing 30 years. Although IT infrastructure consumes only about 0.8 percent of the energy used worldwide, the cost totaled $7.2 billion in 2005. Given the double-digit growth rate of IT power consumption, steps taken today could save billions of dollars over the next decade.
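
Simon's claim is easy to sanity-check. The Python sketch below compounds the cited $7.2 billion annual cost at the cited 15 percent growth rate and applies a permanent efficiency gain; the 5 percent gain and the 10-year horizon are illustrative assumptions, not figures from the talk.

```python
# Back-of-the-envelope sketch of Simon's argument. Only the $7.2B base cost and
# the 15% growth rate come from the talk; the 5% gain and 10 years are assumed.
BASE_COST_BILLION = 7.2    # cited worldwide IT energy cost in 2005 ($B/year)
GROWTH = 0.15              # cited annual growth of IT power consumption
EFFICIENCY_GAIN = 0.05     # assumed permanent cut from conservation measures
YEARS = 10                 # assumed horizon

baseline = sum(BASE_COST_BILLION * (1 + GROWTH) ** y for y in range(YEARS))
improved = baseline * (1 - EFFICIENCY_GAIN)

print(f"10-year baseline spend   : ${baseline:6.1f}B")            # ~$146B
print(f"10-year spend with 5% cut: ${improved:6.1f}B")
print(f"savings                  : ${baseline - improved:6.1f}B")  # ~$7.3B
```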

Simon cited a Google report that determined that datacenter energy costs are starting to dominate lifecycle costs. According to this study, energy costs may eclipse acquisition costs for low-end servers after just two years of service. With that in mind, Google and Microsoft are building huge datacenters (tens of megawatts) along the Columbia River to take advantage of cheap hydroelectricity and to use the river water for cooling. Ten or fifteen lesser-known IT companies are building similar facilities elsewhere in anticipation of future demand for ultra-scale datacenters. “Clearly the industry is changing and something is going on with power and computing,” Simon noted.
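
To see how the study's two-year figure can come about, here is a hedged break-even sketch; every number in it (server price, power draw, facility overhead, electricity rate) is an assumption chosen for illustration, not taken from the report.

```python
# Break-even sketch for the low-end-server claim. All figures below are
# illustrative assumptions, not numbers from the Google report.
SERVER_PRICE = 1500.0   # assumed purchase price of a low-end server ($)
DRAW_KW = 0.4           # assumed average server draw (kW)
PUE = 2.0               # assumed facility overhead for cooling and distribution
PRICE_KWH = 0.10        # assumed electricity price ($/kWh)

yearly_energy_cost = DRAW_KW * PUE * PRICE_KWH * 24 * 365
years_to_parity = SERVER_PRICE / yearly_energy_cost

print(f"energy cost: ${yearly_energy_cost:,.0f}/year")                  # ~$701
print(f"energy spend matches purchase price after "
      f"{years_to_parity:.1f} years")                                   # ~2.1
```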

Green Destiny

Virginia Tech’s Wu-chun Feng was interested in green computing before it became fashionable. In 2002, Feng, then at Los Alamos National Laboratory, and his colleagues became interested in developing an energy-efficient HPC system that required minimal cooling. The effort was born of necessity. The datacenter available to them wasn’t much more than a warehouse. It had little access to cooling, with temperatures in mid-summer rising to 85-90F. Both power and space were limited. The goal was to develop a highly reliable machine that could operate under these harsh conditions; performance was secondary.

In response, Feng’s team developed a 240-node cluster, called Green Destiny, based on the highly energy-efficient Transmeta processor (1 GHz TM5800). The entire system used 3.2 kilowatts. The Transmeta chips weren’t the fastest ever conceived: Green Destiny topped out at 101 gigaflops on Linpack, which even in 2002 would have placed it in the bottom half of the TOP500. Feng recalled they took some heat about the machine’s low performance, prompting one colleague to joke that it “runs just as fast when it’s unplugged.” But the project was a success. In the two-year life of the system, there was no unscheduled downtime.
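
The cited figures pin down the machine's energy efficiency directly; a quick check:

```python
# Green Destiny's energy efficiency, computed from the figures cited above.
LINPACK_GFLOPS = 101.0   # cited Linpack result
SYSTEM_KW = 3.2          # cited total system power

mflops_per_watt = LINPACK_GFLOPS * 1000 / (SYSTEM_KW * 1000)
print(f"Green Destiny: {mflops_per_watt:.1f} megaflops/watt")   # ~31.6 MF/W
```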

Other than interest in the exotic Transmeta hardware, Feng’s work got little attention. In 2002, HPC was about performance at any cost. Oil was $25 a barrel, and not many people were worried about power and cooling costs yet. The conventional wisdom was that Moore’s Law would solve everything. “It’s interesting to see in five and a half years how things have changed,” said Feng.

Computing per Watt Has Been Solved

HPC veteran John Gustafson broke with conventional wisdom, declaring that the computing part of our machines is already highly energy efficient. He noted that the latest ClearSpeed gear delivers 4 gigaflops/watt, and that Intel will soon achieve that in mainstream processors. According to Gustafson, the computational elements of a modern HPC system consume just a small fraction of the total power.

He illustrated this by pointing out that a typical Linpack run for a top 10 system uses the equivalent of 20 barrels of oil, of which the floating point calculations account for just 0.1 barrel. The rest is spent moving data from one place to another (although he admitted that includes on-chip data movement as well). With that in mind, Gustafson said the industry should now focus on the energy efficiency of data communication. He wants to replace flops with a new metric: “byps,” or bytes per second. According to him, measuring byps per watt will give people a much better understanding of the energy efficiency of systems.
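
A minimal sketch of what Gustafson's proposed metric looks like in practice, with the node figures assumed for illustration, plus a rough cross-check on his oil-barrel comparison:

```python
# Sketch of Gustafson's "byps" metric alongside flops, normalized per watt.
# The node figures below are assumptions for illustration, not from his talk.
def per_watt(rate_giga: float, watts: float) -> float:
    """Normalize a giga-scale rate (gigaflops or GB/s) to units per watt."""
    return rate_giga * 1e9 / watts

NODE_GFLOPS = 40.0   # assumed peak floating point rate
NODE_GBPS = 10.0     # assumed sustained memory bandwidth (GB/s)
NODE_WATTS = 250.0   # assumed node power

print(f"{per_watt(NODE_GFLOPS, NODE_WATTS):.2e} flops/watt")   # 1.60e+08
print(f"{per_watt(NODE_GBPS, NODE_WATTS):.2e} byps/watt")      # 4.00e+07

# Cross-check on the oil comparison: a barrel of oil holds roughly 1,700 kWh,
# so 20 barrels is ~34 MWh -- about what an assumed 2 MW top-10 system would
# burn in an assumed 17-hour Linpack run.
print(f"20 barrels ~ {20 * 1700 / 1000:.0f} MWh")
```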

Wasteful power consumption in data communications is relatively easy to find. According to an IEEE Spectrum report, in 2005 all the NICs in the U.S. were estimated to consume 5.3 terawatt-hours of energy. Since all of IT consumes about 200 terawatt-hours, the NIC devices alone represent 2.6 percent of the power used by all the machines. Furthermore, since communication tends to be bursty, about 95 percent of this energy is wasted: most of the time, the NIC is chewing up watt-hours waiting for the next data deluge.
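
The cited numbers, worked through:

```python
# The NIC example: share of total IT energy and the idle waste, as cited.
NIC_TWH = 5.3         # cited annual NIC consumption (TWh)
TOTAL_IT_TWH = 200.0  # cited annual consumption of all IT (TWh)
IDLE_WASTE = 0.95     # cited fraction wasted waiting between bursts

print(f"NIC share of IT energy: {NIC_TWH / TOTAL_IT_TWH:.2%}")         # 2.65%
print(f"energy wasted idling  : {NIC_TWH * IDLE_WASTE:.1f} TWh/year")  # ~5.0
```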

Gustafson maintains that computing is not going green to reduce energy use or shrink its carbon footprint, but to get more performance within a fixed power budget. “This is the inherent nature of HPC,” said Gustafson. Improvements in performance per watt will go toward increasing performance, not reducing watts. After all, he said, “HPC users are not tree huggers.”

Green Computing by Law

Because of Japan’s limited domestic energy resources and an environmentally conscious populace, green computing is more or less mandated by law in the island nation. Under the Kyoto Protocol, the amount of carbon many government facilities and public universities can emit is regulated. With such stringent limits, datacenters have no choice but to pursue energy efficiency aggressively.

As the technical lead for the TSUBAME supercomputer at the Tokyo Institute of Technology (TiTech), Satoshi Matsuoka has had to deal with this reality for some time. The TSUBAME machine was built with ClearSpeed accelerators on top of conventional Opteron nodes to achieve high levels of performance at low power consumption. Currently at 100 teraflops, the system consumes a total of 1.2 megawatts for power and cooling.

Matsuoka explained that as part of TSUBAME’s upgrade path over the next two years, they are tasked with delivering a one-petaflop system, a 10-fold increase in performance over the current machine, at the same power consumption as today’s TSUBAME. That means they will have to exceed the energy efficiency of the IBM Roadrunner, the most energy-efficient supercomputer ever built. One of the technologies TiTech is looking at is GPGPU. The raw double precision performance per watt is not as good as the ClearSpeed boards, but GPUs are very well suited to bandwidth-intensive applications, like FFT codes. And even with today’s technology, the energy efficiency of GPUs is about five times better than that of Blue Gene.
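
The arithmetic behind that statement, using the cited figures (for reference, Roadrunner's Green500 figure at the time was roughly 440 megaflops/watt):

```python
# What TiTech's upgrade target implies, from the cited figures.
POWER_MW = 1.2   # cited fixed budget for power and cooling

current_mf_per_w = 100_000_000 / (POWER_MW * 1e6)    # 100 TF in megaflops
target_mf_per_w = 1_000_000_000 / (POWER_MW * 1e6)   # 1 PF in megaflops

print(f"today's TSUBAME : {current_mf_per_w:4.0f} MF/W")   # ~83 MF/W
print(f"petaflop target : {target_mf_per_w:4.0f} MF/W")    # ~833 MF/W, roughly
                                                           # double Roadrunner
```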

On the national scale, Matsuoka said the Japanese government is starting a five-year project in ultra-low-power HPC. Researchers will look at multicore processors, accelerators, next-generation memory technology, advanced networks, better cooling technology, facility improvements, zero-emission power sources, and low-power algorithms. The project’s goal is to develop basic technologies that will enable a 1,000-fold increase in energy efficiency over the next decade.

Integrated Facilities Design

Dr. Franz-Josef Pfreundt, who heads IT at Fraunhofer-ITWM, thinks the real discussion of green computing needs to focus on energy costs. He noted that an environmentally friendly solution could be built around suitable biofuels or solar energy technology, but the costs may make such a model impractical.

Pfreundt asserted that energy costs currently represent only a few percent per year of the initial acquisition cost of a supercomputer, which works out to only about 10 to 15 percent of the machine’s cost over its three-year lifetime. At ITWM they’ve achieved that ratio for their latest 2.1 million euro supercomputer, even at the rate of 0.10 euros/kWh. He also argued that extending the useful life of the hardware is another cost-saving strategy.
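
Working backward from those figures gives a sense of the machine's average draw; the 12 percent lifetime energy share below is an assumed midpoint of the stated 10-15 percent range.

```python
# Implied average power draw of ITWM's machine, derived from cited figures.
SYSTEM_COST_EUR = 2.1e6   # cited acquisition cost
ENERGY_SHARE = 0.12       # assumed midpoint of the cited 10-15% lifetime share
PRICE_EUR_KWH = 0.10      # cited electricity rate
LIFETIME_YEARS = 3        # cited machine lifetime

energy_budget_eur = SYSTEM_COST_EUR * ENERGY_SHARE          # ~252,000 EUR
kwh_over_lifetime = energy_budget_eur / PRICE_EUR_KWH       # ~2.5 million kWh
avg_draw_kw = kwh_over_lifetime / (LIFETIME_YEARS * 8760)   # 8,760 hours/year

print(f"implied average draw: {avg_draw_kw:.0f} kW")        # ~96 kW
```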

Pfreundt believes that to optimize power use, people need to consider the efficiency of the entire computing infrastructure. Part of the problem, he said, is that the energy budget for the system is divorced from the acquisition cost. If they were wrapped together as part of the system procurement, buyers would naturally pay more attention to power consumption. At ITWM, they’ve managed to achieve a relatively cost-efficient setup by re-using some of the waste heat and selecting energy efficient hardware.

For example, by taking advantage of the temperate German climate, they use outside air for cooling, something that would not be possible in summer over much of the U.S. and Asia. They also recycle the warmed 86F (30C) exhaust air to heat local greenhouses. Pfreundt thinks that if they could extract more heat from the computers directly, that is, water-cool them, the waste heat would have even more value, since the hot water could be sold for community heating.

ITWM recently purchased a 70-blade IBM BladeCenter QS22 cluster based on the new Cell processors (PowerXCell 8i), the same blades that went into the Roadrunner petaflop machine. At ITWM, they’ve demonstrated 488 megaflops per watt on Linpack and think they can achieve 600 megaflops per watt, which would earn the system the top spot on the Green500 list. While the current Cell processors provide 1.6 gigaflops/watt, Pfreundt projected that within three years the industry will have chips that deliver 10 gigaflops/watt. With that level of efficiency, Pfreundt said, he will be able to get a petaflop into his facility.
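
Pfreundt's projection translates into power budgets as follows. Note the gap between the chip-level and cluster-level numbers: the sketch ignores everything outside the chips (memory, network, cooling), so the chip-level lines are best cases.

```python
# Power needed for one petaflop at a given efficiency, using cited figures.
def kw_for_petaflop(gflops_per_watt: float) -> float:
    """Power in kW for 1 petaflop (= 1e6 gigaflops) at the given efficiency."""
    return 1e6 / gflops_per_watt / 1e3

print(f"at 488 MF/W (cluster, today) : {kw_for_petaflop(0.488):,.0f} kW")  # ~2,049
print(f"at 1.6 GF/W (chip, today)    : {kw_for_petaflop(1.6):,.0f} kW")    # 625
print(f"at 10 GF/W (3-year forecast) : {kw_for_petaflop(10):,.0f} kW")     # 100
```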

Learning From the Embedded Space

Berkeley Lab’s John Shalf observed that because of the industry’s typical two-to-four-year design cycle, not much has yet occurred that fundamentally addresses the power issues in computing, although he thinks we’re starting to see the beginnings of some promising approaches. Accelerators like the Cell processor and GPUs have huge potential, especially with properly tuned codes, but the overhead of shuffling data back and forth between the host CPU and the accelerator can limit performance on many applications. Shalf sees discrete accelerators as a stepping stone on the path to integrated manycore designs.

According to him, the goal of green computing is to minimize the power consumed for the amount of work performed. This has been the driving force behind embedded computing for some time and is the reason Shalf believes that the current power crisis is converging the embedded and high performance computing spaces. In the embedded world, you start with the application and design the system around it. According to Shalf, that kind of tight coupling between hardware and software is what enables exceptional power efficiency.

“That doesn’t mean it’s special-purpose and only works for one application target,” he explained. “It means that you throw away everything that you don’t need for a range of problems.”

That translates into much less complex microprocessors than are typical of today’s standard x86 or Power chips, or even the new Intel Atom. By simplifying the logic, you can design much smaller chips with many more cores, shorter instruction pipelines, and less power leakage. For example, GPUs don’t have TLBs, since they’re not swapping applications in and out of memory. “Most of what you have on these modern CPUs, you don’t need for science,” said Shalf.

At Berkeley, Shalf and others are currently working on “Green Flash,” a research project to define a new class of supercomputers for modeling climate conditions and understanding climate change. They chose the application because it encompasses a wide range of algorithms that are applicable to many different science codes. The work is being done in collaboration with Tensilica, a company that tailors highly energy-efficient embedded processors for platforms like MP3 players and network routers. One implementation, the Xtensa microprocessor, draws just 0.09 watts at 600 MHz and achieves 100 times better floating point performance per watt than the Intel Core2 architecture.

Using Tensilica’s design tools, a new chip can be developed in 18 months at a cost of $5 to $10 million. When you consider that a leadership class supercomputer is typically priced in the $100 million range and is the end result of a multi-year development cycle, a simple microprocessor design could easily fit into the scope of the project. Shalf believes this may be the commodity model that HPC will need to adopt if it hopes to achieve exascale computing.
