TOP500 Reanalysis Shows ‘Nothing Wrong with Moore’s Law’

By Tiffany Trader

November 20, 2015

In Tuesday night’s TOP500 session, list co-creator and co-author Erich Strohmaier brought perspective to what could at first appear to be a land grab of unprecedented scale by China when he shared that many of the new Chinese entrants were mid-lifecycle systems that were only now being benchmarked. But what is likely to be even more revealing is his reanalysis of what the TOP500 says about the apparent health of Moore’s law. Could Intel be right about this after all? And that’s not the only piece of common wisdom that got trounced: accelerator growth also came under scrutiny. Let’s dive in.

Joined onstage by his co-authors Horst Simon, Jack Dongarra and Martin Meuer, as well as HLRS research scientist Vladimir Marjanovic, who would also present, Dr. Strohmaier, head of the Future Technologies Group at Berkeley Lab, began with a review of the top ten, which taken as a set comprise the most mature crop of elite iron in the list’s history. There were two new entrants to that camp, both Crays: the first part of the Trinity install for Los Alamos and Sandia national laboratories, with 8.1 petaflops LINPACK; and the Hazel-Hen system installed at HLRS in Germany, the most powerful PRACE machine, at number eight with 5.6 petaflops LINPACK.

The biggest change on this November’s list was the number of systems from China — 109 installed systems, up from 37 in July — cementing China’s number two position in system share behind the United States, which only managed a list share of 40 percent, down from its typical 50-60 percent footprint.

TOP500 SC15 Performance of Countries

But what Strohmaier said he likes to look at, more than pure system share, is the aggregate installed performance of those systems, which ranks countries by the computing capability they actually field and filters out the distorting effect of a lot of small systems.
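For readers who want to see that distinction concretely, here is a minimal Python sketch; the entries, numbers and field names are invented for illustration and are not actual TOP500 data.

    # Minimal sketch: ranking countries by system count vs. by total installed
    # performance. All entries and numbers below are invented for illustration.
    from collections import Counter, defaultdict

    systems = [
        {"country": "China", "rmax_pflops": 33.9},  # one large flagship
        {"country": "China", "rmax_pflops": 0.3},   # plus many small systems
        {"country": "China", "rmax_pflops": 0.3},
        {"country": "USA",   "rmax_pflops": 17.6},
        {"country": "USA",   "rmax_pflops": 20.0},
    ]

    by_count = Counter(s["country"] for s in systems)
    by_performance = defaultdict(float)
    for s in systems:
        by_performance[s["country"]] += s["rmax_pflops"]

    print(by_count.most_common())                                  # China leads on raw system count
    print(sorted(by_performance.items(), key=lambda kv: -kv[1]))   # but the ordering flips on installed performance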

“If you look instead at the development of installed performance over time, you see that in the last ten years China has had a tremendous increase in terms of installed performance,” Strohmaier remarked. “It is just ahead of Japan now — clearly the second most important geographic region in terms of installed capability, but it’s not nearly as close to the US [as when looking at number of systems].”

Going one step further, the list author clarified that the systems installed in China are actually on the small side, excepting the flagship Tianhe-2, the 33.86 petaflops supercomputer developed by China’s National University of Defense Technology, which has held the number one spot for six list iterations.

“China has had a tremendous run in the last decade,” Strohmaier observed, “and it’s continuing, but it’s not as dramatic as a simple system count would suggest.”

By number of systems, HP is clearly dominant in terms of market share, Cray is number two, and third is Sugon, the surprise company on the list. Sugon has 49 systems on the latest TOP500 list, a 9.8 percent system share. As the TOP500 list co-author discussed during the TOP500 BoF, Sugon’s story looks a little different from a performance perspective. The company captured over 21 petaflops for a 5 percent market share, which positions it in seventh place, behind Cray (25 percent), IBM (15 percent), HP (13 percent), NUDT (9 percent), SGI (7 percent), and Fujitsu (also rounded to 5 percent, but with a slightly higher 22 petaflops).

Strohmaier went on to make the point that Sugon is new to the TOP500 and had to learn how to run the LINPACK benchmark and submit to the list. The company increased its presence from five systems on the previous list to 49 – since one fell off, that means 45 systems were added.

“Sugon really took the effort, and the energy and the work, and ran the benchmark on all their installations, regardless of how well or badly they performed, and gave us the numbers,” said Strohmaier. “They went to great lengths to figure out where they are in terms of supercomputing, in terms of what the systems can do and in terms of where they’ll be in the statistics.”

He gave due credit to the company and the individuals within it who made this possible. “Sugon is now number three, while before it had very little list presence,” he added.

The kicker here, however, is that these are not new systems, which really would be an extraordinary feat if they were. “Many of the additions are … two to three years old, which had never been measured or submitted until now,” Strohmaier clarified.

The list also reflects the shake-up from the IBM x86 offload to Lenovo, leaving a rather confusing four-way division represented by the following categories: IBM, Lenovo, Lenovo/IBM and IBM/Lenovo. These “artifacts” will disappear over time, but right now this arrangement that was worked out between the vendors and customers dilutes the original IBM and Lenovo categories.

Lenovo is of course a Chinese company with a mix of systems that it built and sold itself as well as former IBM systems to which it now holds title. Then there’s Inspur, another Chinese vendor, with 15 systems. In all, there are three Chinese companies that are now prominent in the TOP500, and together they produced the influx of Chinese systems, said Strohmaier.

He went on to compare each vendor’s share of total list performance with its share of systems, which shows that “HP traditionally installs small systems, Cray installs large systems, and then there is Sugon, which is an exception, because they have smaller systems, thus their share of performance is much smaller.” IBM, which is close behind Sugon in system share with 45 systems (mostly leftover Blue Genes), has a large market share in terms of performance because it kept custody of the large Blue Gene systems. Inspur and Lenovo both have below-average performance shares, while Fujitsu and NUDT have much larger ones, reflecting their flagship systems, the K computer and Tianhe-2 respectively.

Switching back to the list in general, Strohmaier addressed the low turnover of the last couple of years. Before 2008 the average system age was 1.27 years; now it’s a tick below three years, marginally younger than on the June list. The TOP500 author attributed that small improvement to the bolstering influence of Sugon and of the IBM-to-Lenovo transition. “Customers keep their systems longer than they used to; this has not changed, other than that small upturn [which can be explained].”

Moore’s law is fine!

The classic slide from each list iteration shows how performance grows over time, plotting the performance of the first system, the last system and the sum of the entire TOP500, which Strohmaier thinks of as “500 times the average.”

TOP500 SC15 Performance Development
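For a single list edition, those three curves reduce to a maximum, a minimum and a sum over the Rmax column; a minimal Python sketch, with made-up values standing in for a real edition:

    # Minimal sketch of the three classic TOP500 curves for one list edition:
    # the No. 1 system, the last-ranked system, and the sum over the whole list.
    def list_statistics(rmax_gflops):
        ranked = sorted(rmax_gflops, reverse=True)
        first = ranked[0]      # performance of the No. 1 system
        last = ranked[-1]      # performance of the last-ranked system
        total = sum(ranked)    # the sum; equivalently, 500 times the average
        return first, last, total

    # The leading values echo well-known Rmax figures in gigaflops; the flat tail is made up.
    demo = [33_862_700, 17_590_000, 17_173_224] + [210_000] * 497
    print(list_statistics(demo))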

There has been impressive growth, and for many years the trend was very accurate at predicting future growth, but in the last few years inflection points have appeared, one in 2008 and another in 2014, after which the trajectory grows more slowly.

This raises two important questions, says Strohmaier: why is there an inflection point at all, and why does it show up at two different points in time?

“The nice thing is that the old growth rates before the inflection points were the same on both lines, and the new growth rates are again the same on both lines. So the one effect is clearly technology; the other, in my opinion, is financial,” noted Strohmaier.

TOP500 SC15 Projected Performance Development

The slide showing projections based on the trends before and after the inflection point makes clear that a seemingly small change in slope results in a significant 10X differential by the end of the decade.

“So instead of having an exascale computer by 2019, as we may have predicted ten years ago, we now think it’s going to be more like the middle of the next decade,” Strohmaier stated.
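To make the compounding effect concrete, here is a back-of-the-envelope Python sketch. The starting point and annual growth factors are illustrative assumptions, not the fitted TOP500 trend lines; they are chosen only to show how a modest change in slope pushes the exascale crossing out by years and opens roughly an order-of-magnitude gap by 2020.

    # Back-of-the-envelope sketch of how two exponential trend lines diverge.
    # All values are illustrative assumptions, not fitted TOP500 regressions.
    import math

    start_year = 2013
    start_pflops = 33.86      # a convenient anchor for the top of the list

    old_growth = 1.9          # assumed pre-inflection growth factor per year
    new_growth = 1.35         # assumed post-inflection growth factor per year

    def year_reaching(target_pflops, growth_per_year):
        """Year at which an exponential trend starting at start_pflops hits the target."""
        years = math.log(target_pflops / start_pflops) / math.log(growth_per_year)
        return start_year + years

    exaflops_in_pflops = 1000.0
    print(f"1 EF on the old trend: ~{year_reaching(exaflops_in_pflops, old_growth):.0f}")
    print(f"1 EF on the new trend: ~{year_reaching(exaflops_in_pflops, new_growth):.0f}")

    gap_by_2020 = (old_growth / new_growth) ** (2020 - start_year)
    print(f"Gap between the two trend lines by 2020: ~{gap_by_2020:.0f}x")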

“Looking into what actually happened, we have to be more careful in how we construct the basis for our statistics,” he continued. “The TOP500 is an inventory-based list; new and old technology are all mixed up. If you really want to see the changes in technology on the list, you have to apply filters to isolate the new systems with new technology coming onto the list and analyze that subset.”

Strohmaier accordingly filtered the list down to the new systems and further restricted it to systems that use only traditional scalar processors: no Nvidia chips and no Intel Xeon Phis. The point of this exercise was to tease out the track of traditional processor technology.
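A minimal Python sketch of that kind of filtering, assuming each record carries the edition in which it first appeared and an accelerator field (the field names and entries are invented, not the actual TOP500 schema):

    # Minimal sketch: isolate systems that are new to the current list and use
    # only traditional scalar processors (no GPUs, no Xeon Phi). Fields invented.
    current_edition = "2015-11"

    def is_new_scalar_system(entry):
        return entry["first_listed"] == current_edition and not entry["accelerator"]

    systems = [
        {"name": "A", "first_listed": "2015-11", "accelerator": None,  "rmax_pflops": 1.2},
        {"name": "B", "first_listed": "2013-06", "accelerator": None,  "rmax_pflops": 0.8},
        {"name": "C", "first_listed": "2015-11", "accelerator": "GPU", "rmax_pflops": 2.5},
    ]

    subset = [s for s in systems if is_new_scalar_system(s)]
    print([s["name"] for s in subset])   # only "A" survives the filter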

This is the result:

TOP500 SC15 Tech Trends Scalar Processors - Moore's Law is fine

Strohmaier:

What you see is that the performance per core took a dramatic hit around 2005-2006, but it was compensated by our ability to put more and more cores on a single chip, which is the red curve, and if you multiply that out into performance per socket (per actual chip), you get the blue curve, which is actually pretty much Moore’s law. So what you see in this sample is that there is no clear indication that there is anything wrong with Moore’s law.
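In other words, per-socket performance is simply per-core performance multiplied by the number of cores on the chip; a toy Python illustration with invented numbers (the compensation effect, not the specific values, is the point):

    # Per-socket performance = per-core performance * cores per chip.
    # The numbers are made up to illustrate the compensation effect only.
    gflops_per_core_2005 = 5.0
    cores_per_chip_2005 = 2
    gflops_per_core_2015 = 20.0    # per-core gains slowed dramatically...
    cores_per_chip_2015 = 16       # ...but core counts kept climbing

    socket_2005 = gflops_per_core_2005 * cores_per_chip_2005   # 10 Gflops per socket
    socket_2015 = gflops_per_core_2015 * cores_per_chip_2015   # 320 Gflops per socket
    # ~32x in ten years, i.e. roughly a doubling every two years
    print(f"Per-socket growth over the decade: {socket_2015 / socket_2005:.0f}x")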

So what caused the slow-down in the performance curve?

The other thing is that over the decades we have put more and more components into our very large systems. I tried to approximate that by looking at the number of chip sockets we have in these large scalar systems — that’s what you see on the red curve. While the average performance follows Moore’s law, the red line does not follow a clear exponential growth rate after about 2005-2006.

TOP500 SC15 Tech Trends Scalar Systems

At that point, we seem to run out of steam, or out of money, in our ability to put more components into the very large systems, and the very large systems are not growing in overall size anymore as they did before. That is my interpretation of the data: that is why we have an inflection, and that is why the overall performance growth in the TOP500 has been reduced from its previous levels.

Right now supercomputing grows with Moore’s law, just as it did when supercomputing began; it no longer outgrows Moore’s law, as it had in the years in between.

So it’s clearly a technological reason, but it’s not a reason at the chip level; it’s a reason at the facility and system level, most likely related to power or money or both.
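Putting the two observations together: a system’s total performance factors into sockets per system times performance per socket, so once socket counts stop growing, the list can only grow at the pace of the chips themselves. A small Python sketch with assumed values:

    # System performance ~ sockets_per_system * performance_per_socket (values assumed).
    # When socket counts stop growing, total growth falls back to the per-socket rate.
    def system_pflops(sockets, gflops_per_socket):
        return sockets * gflops_per_socket / 1e6

    # Earlier era: both factors grow, so systems outpace the chips themselves.
    print(system_pflops(10_000, 50), "->", system_pflops(40_000, 200))   # 0.5 -> 8.0 petaflops (16x)

    # Recent era: socket counts flat, so systems grow only at the chip's rate.
    print(system_pflops(40_000, 200), "->", system_pflops(40_000, 800))  # 8.0 -> 32.0 petaflops (4x)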

Accelerator stagnation

Strohmaier, who has been one of the more ardent defenders of the benefits of LINPACK as a unified benchmark, went on to explore accelerator trends, acknowledging that they are responsible for a considerable share of petaflops. “But if you look at what fraction of the overall list those accelerators contribute,” he went on, “and if you focus on the last two years, their share has actually stagnated if not fallen.”

“That means there is a hurdle linked to the market penetration of those accelerators; they have not been able to penetrate markets beyond scientific computing. They have not gotten into the mainstream of HPC,” he added.

Power efficiency was another metric covered in the BoF. Looking at the average power efficiency of the top ten, the trajectory is uneven but improving. The highest power efficiencies on the list are much better still. These power winners tend either to use accelerators or to be BlueGene/Q systems, which were engineered for power efficiency.

TOP500 SC15 Most Power Efficient Architectures

The chart above shows the standouts for highest power efficiency, with new machines highlighted in yellow. TSUBAME-KFC, installed at the Tokyo Institute of Technology and upgraded from NVIDIA K20X GPUs to K80s for the latest benchmarking, came in first. What surprised Strohmaier were the number two and three machines in this power-efficiency ranking: systems from Sugon and Inspur, respectively. And once again, every system on this megaflops-per-watt ranking has an accelerator on it. (More on the greenest systems will be forthcoming in a future piece.)
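The metric behind this ranking is simply LINPACK performance divided by measured power draw; a minimal Python sketch with invented values:

    # Megaflops per watt = Rmax (in Mflops) / power (in watts). Values are invented.
    def mflops_per_watt(rmax_tflops, power_kw):
        return (rmax_tflops * 1e6) / (power_kw * 1e3)

    machines = {"accelerated system": (500.0, 90.0), "cpu-only system": (500.0, 250.0)}
    ranked = sorted(machines.items(), key=lambda kv: -mflops_per_watt(*kv[1]))
    for name, (rmax, power) in ranked:
        print(f"{name}: {mflops_per_watt(rmax, power):,.0f} Mflops/W")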

The final slide presented by Dr. Strohmaier plots the best application performance from the Gordon Bell Prize, awarded each year at SC, against the TOP500 to show the correlation between the two. Since these are different applications run on potentially different systems, a close tracking between the two trends over time could be taken to suggest that LINPACK is still a useful reflection of real-world performance. This is something to dive deeper into another time, but for now, here is that slide:

TOP500 SC15: TOP500 vs Gordon Bell

 
