Swiss Researchers Propose ‘GreenIT’ Methodology for HPC

By Nicole Hemsoth

November 19, 2010

The latest Green500 list announced this week at SC10 is once again shining the spotlight on the energy efficiency of the world’s top supercomputers. But the path to more efficient high performance computing goes beyond this simple benchmark-based approach. Ralf Gruber and Vincent Keller, both from École Polytechnique Fédérale de Lausanne (EPFL), describe a holistic approach to more energy-efficient HPC operations in their book, HPC@Green IT. HPCwire contributor Steve Conway interviewed the Swiss duo about their ideas, including a new benchmark.

HPCwire: Why did you write a book on “green” high performance computing methods?

Ralf Gruber: There was no theory on how to couple application needs to hardware offers. In the book, we try to set up a theory by defining parameters that characterize resources and applications. This parameterization is then used to develop models that predict whether a computer architecture is well suited for a given application. These models can also be used to detect poor application implementations, to redesign computer architectures, to identify resources that should be switched off, to run two or more complementary applications on the same resource so that they optimally use its different parts, or simply to recognize the best-suited machine for a given application.
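
To make the idea of coupling application needs to hardware offers concrete, here is a minimal roofline-style sketch in Python. It is only an illustration of the concept, not the book's actual parameterization; the function names and the example numbers are assumptions.

```python
# Illustrative sketch only, not the authors' exact parameterization.
# It captures the spirit of matching "application needs" to "hardware offers"
# with a roofline-style ratio: operations per byte needed by the code versus
# operations per byte the machine can sustain.

def machine_balance(peak_gflops: float, mem_bw_gbs: float) -> float:
    """Flop/s the machine offers per byte/s of main-memory bandwidth."""
    return peak_gflops / mem_bw_gbs

def app_intensity(flops: float, bytes_moved: float) -> float:
    """Flops the application performs per byte it moves to/from main memory."""
    return flops / bytes_moved

def well_suited(peak_gflops, mem_bw_gbs, flops, bytes_moved):
    """True if the application can keep the cores busy on this machine
    (compute-bound); False if it will be starved by main memory."""
    return app_intensity(flops, bytes_moved) >= machine_balance(peak_gflops, mem_bw_gbs)

# Hypothetical numbers: a sparse matrix-vector kernel (low intensity)
# on a node with 100 Gflop/s peak and 25 GB/s memory bandwidth.
print(well_suited(100.0, 25.0, flops=2.0e9, bytes_moved=8.0e9))   # False: memory-bound
print(well_suited(100.0, 25.0, flops=5.0e10, bytes_moved=8.0e9))  # True: compute-bound
```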

Vincent Keller: Finally, we used these models to interact with DVS-capable (dynamic voltage scaling) processors in order to tune the processor frequency at runtime. Measurements on a Nehalem already show overall energy reductions of up to 30 percent for main-memory-access-dominated applications.
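
To illustrate the kind of run-time frequency control such models can drive, here is a minimal sketch using the Linux cpufreq sysfs interface. It is not the authors' tool: the flops-per-byte threshold and function names are illustrative assumptions, writing to these files requires root privileges, and the scaling_available_frequencies file and userspace governor are only exposed by some cpufreq drivers (such as acpi-cpufreq, common in the Nehalem era).

```python
# Minimal sketch, not the authors' tool: lower a core's clock when an
# application phase is dominated by main-memory access, using the Linux
# cpufreq sysfs interface (requires root and the "userspace" governor).
# The 0.5 flops-per-byte threshold below is an illustrative assumption.

CPUFREQ = "/sys/devices/system/cpu/cpu{core}/cpufreq/{attr}"

def read(core: int, attr: str) -> str:
    with open(CPUFREQ.format(core=core, attr=attr)) as f:
        return f.read().strip()

def write(core: int, attr: str, value: str) -> None:
    with open(CPUFREQ.format(core=core, attr=attr), "w") as f:
        f.write(value)

def tune_core(core: int, flops_per_byte: float, threshold: float = 0.5) -> None:
    """Pick the lowest advertised frequency (kHz) for memory-bound phases,
    the highest for compute-bound phases."""
    freqs = sorted(int(f) for f in read(core, "scaling_available_frequencies").split())
    write(core, "scaling_governor", "userspace")
    target = freqs[0] if flops_per_byte < threshold else freqs[-1]
    write(core, "scaling_setspeed", str(target))

# Example: a sparse solver phase that moves far more bytes than it computes.
# tune_core(0, flops_per_byte=0.2)
```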

HPCwire: You make recommendations in several areas. What are your application-oriented recommendations?

Gruber: Together with efficient monitoring, the parameterization of the applications leads to models that help us understand how well an application runs on different computers. The models, for instance the complexity model, also help to detect unexpected behavior that can then be corrected.

Keller: We also recommend that vendors and the main HPC actors create a new, application-oriented REAL500 list, based on the observation that the current TOP500 list is largely used for marketing purposes and no longer reflects real applications. Beyond a certain point, it becomes counter-productive to making better use of large-scale architectures.

HPCwire: How about your recommendations for system software?

Keller: System software should make it easy to measure the behavior of an application. It should then also be possible to act on hardware components during execution, such as switching off unused cores, reducing clock frequencies, or disabling unused main memory.
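
As one concrete example of acting on the hardware during execution, the following sketch takes unused cores offline through the Linux CPU-hotplug sysfs interface. The function names and the decision of which cores count as unused are illustrative assumptions, not the authors' system software, and writing to these files requires root privileges.

```python
# Minimal sketch of one such action: taking unused cores offline through the
# Linux CPU-hotplug sysfs interface (requires root).

ONLINE = "/sys/devices/system/cpu/cpu{core}/online"

def set_core_online(core, online):
    """Bring a core online (True) or take it offline (False).
    cpu0 usually cannot be taken offline on Linux."""
    with open(ONLINE.format(core=core), "w") as f:
        f.write("1" if online else "0")

def park_unused_cores(used_cores, total_cores):
    """Switch off every core the running application does not need."""
    for core in range(1, total_cores):  # keep cpu0 online
        set_core_online(core, core in used_cores)

# Example: an application that only scales to 4 of 8 cores.
# park_unused_cores(used_cores={0, 1, 2, 3}, total_cores=8)
```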

HPCwire: Sum up your recommendations for reducing energy use.

Keller: Energy reduction can be achieved by improving the efficiency of the application; through frequency reduction (four times more resources running at a four times lower frequency consume four times less energy); by switching off unused parts; or by choosing a better-suited computer for the application to run on.
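
The frequency argument can be checked with a back-of-the-envelope calculation. The quadratic power-frequency relation used below is our simplification, not necessarily the book's exact model; it also assumes four units at a quarter of the frequency finish the same work in the same wall-clock time as one unit at full frequency.

```latex
% Assumption (ours, a common simplification): P(f) = c f^2, and four units
% at f/4 complete the same work in the same wall-clock time T as one unit at f.
\begin{align*}
E_{\text{1 unit at } f}    &= P(f)\,T = c f^{2} T,\\
E_{\text{4 units at } f/4} &= 4\,P\!\left(\tfrac{f}{4}\right)T
                            = 4\,c\,\tfrac{f^{2}}{16}\,T
                            = \tfrac{1}{4}\,c f^{2} T,\\
\frac{E_{\text{4 units at } f/4}}{E_{\text{1 unit at } f}} &= \frac{1}{4}.
\end{align*}
```

The same simplification gives the factor-of-16 per-unit power reduction, P(f/4) = P(f)/16, cited later in the interview.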

HPCwire: You mention that the TOP500 list and the derivative Green500 list are based on the narrow High Performance Linpack benchmark. What do you propose as an alternative to better measure energy efficiency?

Gruber: The parameterization and the models described in the book enable people to predict the behavior of an application on a different hardware platform, provided one knows the timings of a few characteristic test applications. The ideal, then, would be to perform measurements with processor, main-memory, and network test cases for which the application-oriented parameters are exactly known.

Keller: Typical test cases are applications such as matrix-matrix-dominated, HPL-like codes; iteratively solved, matrix-vector-dominated codes; Poisson problems described by sparse matrices; multicast-communication-dominated codes such as CP2K; and point-to-point-dominated codes such as SpecuLOOS. With those measurements, it would be possible to predict the behavior of your own application on the new hardware.

Keller: As a consequence, the new REAL500 classification would not be based on a single value, as is the case with today’s TOP500, but on several metrics, including pure CPU performance, the ratio of CPU performance to memory bandwidth, multicast communication performance, point-to-point communication performance, and network latency. Knowing the application ecosystem, one can then choose the right machine, or a set of machines, to fit the needs of the application components and achieve the greenest, best-performing results.
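
As a toy illustration of choosing a machine from several metrics rather than a single number, the following sketch scores machines by their bottleneck ratio against an application's demands. The metric names, numbers, and min-ratio scoring rule are our illustrative assumptions, not the proposed REAL500 methodology.

```python
# Toy sketch of the idea: rank machines by how well their per-metric
# measurements cover the demands of an application. All values are made up
# and normalized to some reference machine (= 1.0).

MACHINES = {
    "cluster_A": {"cpu": 1.0, "mem_bw": 0.6, "multicast": 0.8, "p2p": 0.9, "latency": 0.7},
    "cluster_B": {"cpu": 0.7, "mem_bw": 1.2, "multicast": 1.1, "p2p": 1.0, "latency": 1.3},
}

def suitability(machine, demand):
    """Score = worst ratio of offered to demanded performance across metrics:
    the bottleneck resource decides how well the machine fits."""
    return min(machine[m] / need for m, need in demand.items())

def best_machine(machines, demand):
    """Name of the machine with the highest bottleneck score."""
    return max(machines, key=lambda name: suitability(machines[name], demand))

# A sparse, iterative solver: memory bandwidth and network latency matter most.
demand = {"cpu": 0.5, "mem_bw": 1.0, "multicast": 0.3, "p2p": 0.6, "latency": 1.0}
print(best_machine(MACHINES, demand))  # cluster_B on these made-up numbers
```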

HPCwire: Worldwide studies by IDC and Avetec showed that 69 percent of HPC datacenters do not actively measure energy efficiency today, and 80 percent have no strong mandate to improve energy efficiency. What will change this situation?

Keller: As a first comment, if 69 percent of the centers do not measure energy, it is understandable that 80 percent of them have no mandate to improve energy efficiency. By giving them the right tools to show that it is possible to reduce the energy bill for hardware and cooling with no loss of computational performance, we are convinced that their financial departments will consider the question important and act. The situation is already changing. It is not uncommon to see a datacenter that would like to extend its computing capacity but cannot because of a power supply limitation. The demand for computing power keeps increasing, but energy consumption should not.

HPCwire: John Gustafson of Intel Labs says that by 2018, we’ll have an exaflop computer and the memory bandwidth will consume half of the power. How important is it to create new strategies to minimize data movement?

Gruber: Main memory is already the big issue now. When we reduce the frequency of the processor during execution, for instance on a Nehalem, main memory consumes most of the energy, and that is true today, not only in 2018. The major problem is the small degree of parallelism in data access. We should greatly increase access parallelism by increasing the number of memory banks, as in the old vector machines, and by widening the bit stream. Then it will be possible to decrease the frequency and the energy consumption.

HPCwire: Is cloud computing more or less energy-efficient than in-house computing?

Keller: Cloud computing is a buzzword. It is little more than grid computing plus a business model, and the latest strategy of scientists to raise funding for academic research. Grid computing was a big dream and a big failure. Why? Because the question of “who pays?” was never taken into account.

Cloud computing is different in that sense. A provider gives a certain quality of service: “I will provide you 1 gigaflops with a memory bandwidth of 1 GB/second for $1/hour.” Thanks to virtualization, the cloud computing providers, such as Amazon, Salesforce or Google, can offer computing power to their customers at a lower price, with multiple customers on the same hardware. We’ve known since the mainframe era that shared resources are cheaper and more energy-efficient than distributed resources that are left idle part of the time. In that specific sense of re-implementing old concepts, cloud computing could be more energy-efficient than in-house computing.

Last but not least, the data transfer from the customer to the provider and back is not taken into account in the final bill. It is more or less like living in Geneva: Swiss people know that food is less expensive in France than in Switzerland, but they have to take into account the round trip. How much food would make it less expensive, with the transport costs included, to buy in France rather than in Switzerland?

HPCwire: What tips do you have for choosing a new supercomputer that will use energy wisely?

Keller: In a recent publication [1], we propose a GPU-based supercomputer that uses only a few cores, with the others switched off, and runs these at a four times lower frequency. This would reduce energy consumption by a factor of 16. To compensate for the performance reduction, four times more units must then be purchased. Together with the fact that the amount of main memory per processor can be reduced by a factor of four, the overall energy consumption can be estimated to drop by a factor of nine, simply by downgrading the resources.

Gruber: We also realized that the overall costs over four years could be lower for the downgraded hardware. In addition, decreasing the temperature by about 30°C increases the MTBF (mean time between failures) by a factor of eight. This is another important issue for exaflop machines. We were told that multiplying the number of functional units by four is unacceptable. We believe that running with one million cores or with four million cores is not an issue, but consuming nine times less energy and increasing the MTBF by a factor of eight are very important issues. The question we have to ask the hardware companies is clear: Will they agree to downgrade their computers to increase energy efficiency?
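
The factor of eight follows from the widely used rule of thumb (our gloss on the reasoning, not stated explicitly above) that component MTBF roughly doubles for every 10 °C drop in operating temperature:

```latex
% Rule-of-thumb gloss: MTBF roughly doubles per 10 degree C temperature drop.
\mathrm{MTBF}(T - 30\,^{\circ}\mathrm{C}) \;\approx\; 2^{30/10}\cdot\mathrm{MTBF}(T) \;=\; 8\cdot\mathrm{MTBF}(T)
```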

[1] Keller, V. and Gruber, R., “One Joule per GFlop for BLAS2 Now!,” Proceedings of ICNAAM 2010, pp. 1321–1324, ISBN 978-0-7354-0834-0.
