HPC Under the Covers: Linpack, Exascale & the Top500

By Tiffany Trader

June 28, 2018

HPCers can get painted as a monolithic bunch by outsiders, but internecine disagreements abound over the HPCest of HPC jargon, as was evident at ISC this week.

Ask four HPC leaders about Linpack’s relevance, get four distinct answers — and that’s just what happened at the Monday Top500 panel. During the panel, moderator Horst Simon (Top500 co-author and deputy lab director of Lawrence Berkeley National Lab) asked panelists Yutong Lu, Steve Conway, Thomas Schulthess and Steve Scott about the limitations surrounding Linpack and what needs to be changed at the Top500.

Yutong Lu, National Supercomputing Center in Guangzhou, China, and ISC 2019 Program Chair:

“I think the performance for the supercomputer will be the eternal target because people will always ask and care about how much faster the supercomputer could run, and what’s the highest performance that can be reached. But I think that the metrics could be changed. If you look back 20-or-more years, the computational power was the bottleneck of the full system, so the HPL was a good benchmark at that time and continued to be over the past 20 years. But now we all note that the data access and ability have become the bottleneck of the system, so we obviously need some new benchmarks to measure that part. That will be something we need to change.”
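Lu’s point about data access overtaking raw compute as the bottleneck can be made concrete with a rough roofline-style comparison. The sketch below is an editorial illustration with made-up machine numbers (the peak flops, bandwidth and intensity values are assumptions, not figures for any real system); it contrasts a compute-bound, HPL-style dense kernel with a bandwidth-bound sparse kernel of the kind HPCG emphasizes.

# Rough roofline-style comparison: compute-bound HPL vs. bandwidth-bound SpMV.
# All machine numbers are illustrative placeholders, not real system specs.
peak_flops = 100e15      # 100 Pflop/s peak compute (hypothetical)
peak_bw    = 10e15       # 10 PB/s aggregate memory bandwidth (hypothetical)

def attainable(intensity_flops_per_byte):
    # Simple roofline: performance is capped either by compute or by bandwidth.
    return min(peak_flops, peak_bw * intensity_flops_per_byte)

# HPL (dense LU) does O(n^3) flops on O(n^2) data, so its arithmetic intensity
# grows with problem size and the kernel can stay compute-bound.
hpl_intensity = 100.0    # flops per byte, representative of blocked dense LU

# A sparse matrix-vector product performs only a couple of flops per byte
# fetched from memory, so it is firmly bandwidth-bound.
spmv_intensity = 0.25

print("HPL-like kernel :", attainable(hpl_intensity) / 1e15, "Pflop/s")
print("SpMV-like kernel:", attainable(spmv_intensity) / 1e15, "Pflop/s")

On those invented numbers the same machine runs at full peak on the dense kernel but at a fortieth of peak on the sparse one, which is the gap between what HPL rewards and what data-bound applications actually see.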

Steve Conway, COO of Hyperion Research:

“The Top500 is great as a census of elements affecting large supercomputers over time, but it’s often been interpreted–as it was never intended to be–as a predictor of performance over a spectrum of HPC code. One thing that could be valuable is a warning like on the cigarette label that says ‘this could be fatal if you use it as a predictor.’ But I was very pleased to see the attention paid to HPCG and the Green500 and the inclusion of those lists. My only recommendation would be to give those equal promotional strength.”

Thomas Schulthess, Director of the Swiss National Supercomputing Centre (CSCS):

ISC 2018 Panel: Top500’s Relevance after 25 Years

“I have quite a different opinion. The relevance today is clearly from a political point of view and a funding point of view. From an application performance point of view the story is very different. It even comes to the point where the Top500 may actually be a distraction if you have certain goals on the application side. And let me give you an example: there is the TaihuLight system in Wuxi, and the Piz Daint system that I have a lot of authority over. When you look at the flops, TaihuLight is on top with a factor of five difference. When you look at how the benchmark from the weather and climate community performs–the baroclinic instability test–then the order is reversed and the performance of Piz Daint is about two to three times faster.

“We’ve been thinking about this quite a bit…and it turns out that flops is not a good metric to design systems against. It may be good to track and look back retroactively, but not looking into the future. The conclusion is that we need a metric that relates to a scientific goal: so simulated years per day for the given size of the problem. And it is very important that the size of the problem factors in. Remember in the Top500 the HPL we do the size of the problem to maximize this metric of flops. We can do the same with HPCG. It turns out from an engineering point of view that this is not good. If you’re paid to do something, you’re not going to change your target just to maximize some number. That’s a really bad idea.

“We need to set goals. In weather and climate I think we have very clear goals that everybody can relate to, and I wish that the scientific community could come together behind a few goals rather than everybody wanting their own goal to be the metric. So not just some performance metric, but the size of the problem needs to be set, and can be varied over time. But we need to compare apples to apples, and not apples to oranges. And the last point that is really missing in the Top500 is the algorithmic or the method side. Changing algorithms in the history of computing is just as important as changing architectures.”
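For readers less familiar with the metric Schulthess is advocating, here is a minimal sketch of how simulated years per day (SYPD) is computed for a weather or climate run. The run time and problem size in it are hypothetical; the essential point, as he stresses, is that the problem size is fixed by the scientific goal rather than tuned to flatter the number.

# Simulated years per day (SYPD): how much model time a fixed-size
# climate/weather configuration advances per wall-clock day.
# The numbers below are hypothetical, for illustration only.
model_days_simulated = 365.0   # one simulated year
wall_clock_hours     = 6.0     # wall-clock time the run actually took

sypd = (model_days_simulated / 365.0) / (wall_clock_hours / 24.0)
print(f"{sypd:.1f} simulated years per day")   # -> 4.0 SYPD

# Crucially, the problem size (say, a fixed ~1 km global grid) is part of the
# metric's definition; unlike HPL, the problem size is not chosen to maximize
# the reported number.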

Steve Scott, Cray SVP and Chief Technology Officer:

“From a scientific perspective I couldn’t agree with you more. From a practical perspective I can’t agree with you at all.

“I would love to see simulated years per day as a much more interesting and useful metric, but there’s no way that you could do that. And you can’t really change the metric that the list uses because it sort of invalidates that historical record aspect. So we have to count on people that are actually doing these procurements and fielding these big systems to be sophisticated people who understand what’s really important and that Linpack is not that thing; and I absolutely think that the Top500–despite all of the good that it’s done–has caused some bad behavior. People have made decisions to get to a higher ranking on that list. And then there’ve been other people who have said ‘I’m going to buy a supercomputer and I’m not even going to put it on the list, because I don’t endorse the metric.’ I think the reality is that you’re not going to be able to change the Top500 benchmark. I like the idea of augmenting it with some things, and there’ve been some attempts with HPCG and the HPCC benchmarks. So we can augment it; I don’t think we can change it.

“I think that the HPL performance is becoming more and more disjointed from real application performance as we go forward, and memory bandwidth and interconnects and other things matter a lot more. Architectural aspects matter. As we get closer to the end of the CMOS era and we may change the way we do computing or go to completely different architectures, it may become even more strained to the point where we have to do something. But in the meantime I’m not sure there’s a whole lot we can do other than continue on the current path.”

Is it Exascale?

At ISC and on #HPC Twitter, discussion has also turned to the “true meaning” of exascale.

The discussion was further unpacked in this fun exchange from the ISC Analyst Crossfire put on by Intersect360 Research CEO Addison Snell with panelists Depei Qian (Sun Yat-Sen University & Beihang University), Stephan Schenk (BASF SE), Alex Bouzari (DDN), and Ian Colle (HPC at Amazon Web Services).

“Exascale is a term that’s driving me nuts because it has no exact definition,” said Snell, who proceeded to proffer variations of potential exascale definitions to get panelists’ quick takes.

The panelists all agreed that “exa-levels of something non-computational, like an exabyte of storage under one namespace or if you could magically have an exabit/sec of bandwidth” were not exascale, with one panelist offering that “exascale is whatever gets politicians to fund industry.”

The question of whether 10^18 floating-point operations per second at reduced/mixed precision should be called exascale drew one yes, another reference to funding, and consensus from the other panelists, the moderator and yours truly that this was moving the goalposts.

As for whether 10^18 flops of theoretical peak, with no Linpack or other benchmark or application result, gets you to exascale, the panelists were unanimous that it does not, with one commenting that “the only thing Linpack does is it gets funding from politicians,” and another that “if our focus is on doing real work, no.”

In the final scenario, Snell asked whether exaflops for a loosely coupled non-HPC application like SETI@home counts as exascale. That drew three no’s and another nod to the market opportunity.
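The distinction running through these scenarios, theoretical peak versus benchmarked versus application performance, is easy to see with simple arithmetic. The sketch below uses invented machine parameters (the node count, per-node flops and both efficiency fractions are assumptions) to show how a system can be exascale on paper while delivering far less on HPL, and far less again on real codes.

# Theoretical peak (Rpeak) vs. achieved performance, with invented numbers.
nodes          = 10_000
flops_per_node = 1.0e14    # 100 Tflop/s per node at FP64 (hypothetical)

rpeak = nodes * flops_per_node   # "paper" peak: 1e18 flop/s

hpl_efficiency = 0.70            # assumed fraction of peak achieved on HPL
app_efficiency = 0.05            # many real applications see only a few percent

print("Rpeak            :", rpeak, "flop/s")          # 1e18 -> exascale on paper
print("HPL (Rmax-style) :", rpeak * hpl_efficiency)   # 7e17 -> not exascale by HPL
print("Real application :", rpeak * app_efficiency)   # 5e16 -> far from it

Which of those numbers, if any, earns the exascale label is precisely the definitional question Snell was pressing.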

HPC Secrets

The open secrets of HPC are in the crosshairs this week, as illustrated by Andrew Jones’ article published by our friends at the Top500 News.

The increased prevalence of IT/Web-scale systems (close to half the list now) means it is no longer truly a list of 500 supercomputers or HPC clusters. But it was so-called list stuffing via duplicate systems (or large deployments parceled so as to optimize system share) that came to wider attention this week when Lenovo claimed 117 of the 500 machines, becoming the largest Top500 provider as measured by number of systems. It needs to be said that Lenovo didn’t invent the practice–but they have mastered it (the company lists 56 duplicate entries). It should also be said that they have not, to our knowledge, broken any rules.

A search through the annals of the list shows duplicate serial entries exist going back to at least 2010. The practice slowed down after Lenovo purchased IBM’s x86 business in 2014 and ramped up again a year later as Lenovo (and other vendors) figured out how to amplify their list presence, both through system slicing and through increased benchmarking of Web/IT machines. The effect shows up in the interconnect data as a dip in 10G Ethernet (with a corresponding rise in InfiniBand) followed by a renewed climb of 10G Ethernet over that timeframe.

Source: Mellanox June 2018 Top500 analysis slides

My read is that there was a collectively accepted threshold up to which cloud/IT systems and creative system slicing were tolerated, and that line may now have been breached. How or whether the problem will be addressed is not clear to me. It’s not as simple as removing anonymous submissions, since anonymity is requisite for industrial representation (as Jones pointed out). Hundreds of NDA site visits obviously aren’t feasible. Top500 co-author Erich Strohmaier indicated in the Top500 press briefing on Monday that his group knows “reasonably well” who is using the anonymously listed systems; what is more challenging is verifying whether a submission is really configured the way it is claimed. Due to some game-playing in the past, system configurations must now be frozen, so that if systems are parceled up into small increments to get a high system count, those systems cannot later be reconfigured into a larger system to keep them from falling off the list.
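For the curious, the kind of search through the list described above can be approximated from the published Top500 data. The sketch below is a rough illustration only: it assumes a local CSV export of one list edition, and the file name and column names are guesses that would need to be adapted to the actual data.

# Rough sketch: flag likely duplicate/sliced entries within one Top500 edition.
# File name and column names are assumptions about the exported list data.
import pandas as pd

df = pd.read_csv("top500_june2018.csv")   # hypothetical local export of the list

# Entries sharing vendor, interconnect, core count and Rmax are strong
# candidates for one large installation parceled into many submissions.
key = ["Manufacturer", "Interconnect", "Total Cores", "Rmax"]
dupes = (df.groupby(key)
           .size()
           .reset_index(name="entries")
           .query("entries > 1")
           .sort_values("entries", ascending=False))

print(dupes.head(20))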
