TOP500 Reanalysis Shows ‘Nothing Wrong with Moore’s Law’

By Tiffany Trader

November 20, 2015

In Tuesday night’s TOP500 session, list co-creator and co-author Erich Strohmaier brought perspective to what could at first appear to be a land grab of unprecedented scale by China, when he shared that many of these new entrants were mid-lifecycle systems that were just now being benchmarked. But what is likely to be even more revealing is his reanalysis of what the TOP500 says about the apparent health of Moore’s law. Could Intel be right about this after all? And that’s not the only common wisdom that got trounced. Accelerator growth also came under scrutiny. Let’s dive in.

Joined onstage by his co-authors Horst Simon, Jack Dongarra and Martin Meuer, as well as HLRS research scientist Vladimir Marjanovic who would also present, Dr. Strohmaier, head of the Future Technologies Group at Berkeley Lab, began with a review of the top ten, which taken as a set comprise the most mature crop of elite iron in the list’s history. There were two new entrants to that camp, both Crays: the first part of the Trinity install for Los Alamos and Sandia national laboratories with 8.1 petaflops LINPACK; and the Hazel-Hen system, installed at HLRS in Germany, the most powerful PRACE machine with 5.6 petaflops LINPACK at number eight.

The biggest change on this November’s list was the number of systems from China — 109 installed systems, up from 37 in July — cementing China’s number two position in system share behind the United States, which only managed a list share of 40 percent, down from a typical 50-60 percent footprint.

TOP500 SC15 Performance of Countries

But what Strohmaier said he likes to look at more than pure system share is aggregate installed performance of systems, which provides a ranking of peak systems by size, filtering out the effect you might have from a lot of small systems.

“If you look instead at the development of installed performance over time, you see that over the last ten years China has had a tremendous increase in terms of installed performance,” Strohmaier remarked. “It is just ahead of Japan now — clearly the second most important geographic region in terms of installed capability, but it’s not nearly as close to the US [as when looking at number of systems].”

Going one step further, the list author clarified that the systems installed in China are actually on the small side, excepting their flagship Tianhe-2, the 33.86 petaflops supercomputer, developed by China’s National University of Defense Technology, which has been sitting in the number one spot for six list iterations.

“China has had a tremendous run in the last decade,” Strohmaier observed, “and it’s continuing, but it’s not as dramatic as a simple system count would suggest.”
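The distinction Strohmaier draws — ranking by system count versus ranking by aggregate installed performance — is easy to illustrate. The sketch below uses invented entries, not actual TOP500 data, to show how one flagship plus many small machines can win on count while losing on performance:

```python
from collections import Counter

# Hypothetical mini-list: (country, Rmax in petaflops). The values are
# invented for illustration and are not actual TOP500 figures.
systems = [
    ("USA", 17.6), ("USA", 10.5), ("USA", 8.1), ("USA", 5.2),
    ("China", 33.9),                                  # one flagship machine
    ("China", 0.7), ("China", 0.7), ("China", 0.7), ("China", 0.7),
    ("Japan", 8.2), ("Japan", 3.2),
]

# Ranking by number of systems.
count_share = Counter(country for country, _ in systems)

# Ranking by aggregate installed performance.
perf_share = Counter()
for country, rmax in systems:
    perf_share[country] += rmax

# By count, China leads; by installed performance, the USA still does.
print(count_share.most_common())
print(perf_share.most_common())
```

Aggregating performance rather than counting entries filters out exactly the effect Strohmaier describes: a wave of small systems inflates the count without moving the capability ranking nearly as much.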

By number of systems, HP clearly is dominant in terms of market share. Cray is number two, and third is Sugon, the surprise company on the list. Sugon has 49 systems on the latest TOP500 list, a 9.8 percent system share. As the TOP500 list co-author discussed during the TOP500 BoF, Sugon’s story from a performance perspective looks a little different. The company captured over 21 petaflops for a 5 percent market share, which positions them in seventh place, below Cray (25 percent), IBM (15 percent), HP (13 percent), NUDT (9 percent), SGI (7 percent), and Fujitsu (also rounded to 5 percent – but with a slightly higher 22 petaflops).

Strohmaier went on to make the point that Sugon is new to the TOP500 and had to learn how to run the LINPACK benchmark and submit to the list. The company increased its system count from 5 on the previous list to 49 – since one fell off, that means 45 systems were added.

“Sugon really took the effort, and the energy and the work and ran the benchmark on all their installations, regardless of how well or badly they performed and gave us the number,” said Strohmaier. “They went to great lengths to figure out where they are in terms of supercomputing, in terms of what the systems can do and in terms of where they’ll be in the statistics.”

He gave due to the company and the individuals within it that made this possible. “Sugon is now number three, while before it had very little list presence,” he added.

The kicker here, however, is that these are not new systems, which really would be an extraordinary feat if they were. “Many of the additions are … two to three years old, which had never been measured or submitted until now,” Strohmaier clarified.

The list also reflects the shake-up from the IBM x86 offload to Lenovo, leaving a rather confusing four-way division represented by the following categories: IBM, Lenovo, Lenovo/IBM and IBM/Lenovo. These “artifacts” will disappear over time, but right now this arrangement that was worked out between the vendors and customers dilutes the original IBM and Lenovo categories.

Lenovo is of course a Chinese company with a mix of systems that they built and sold as well as previous IBM systems that they now hold title to. Then there’s Inspur with 15 systems, another Chinese vendor. In all, there are three Chinese companies which are now prominent in the TOP500 and that produced an influx of Chinese systems, said Strohmaier.

He went on to examine a vendor’s total FLOPS as a percentage of list share, which shows that “HP traditionally installs small systems, Cray installs large systems, and then there is Sugon, which is an exception, because they have smaller systems, thus their share of performance is much smaller.” IBM, which is closest in system share with 45 systems (mostly leftover BlueGenes), has a large market share in terms of performance because they kept custody of the large Blue Gene systems. Inspur and Lenovo both have below average list share, while Fujitsu and NUDT have much larger shares which are of course reflecting their flagship systems, K computer and Tianhe-2 respectively.

Switching back to looking at the list in general, Strohmaier addressed the low turnover of the last couple of years. Before 2008 the average system age was 1.27 years; now it’s a tick below three years, marginally better than the June list. The TOP500 author attributed this to the bolstering influence from Sugon and from the IBM-Lenovo offload. “Customers keep their systems longer than they used to; this has not changed other than that small upturn [which can be explained].”

Moore’s law is fine!

The classic slide from each list iteration is the one that shows how performance grows over time with the performance of the first, the last and the sum of the TOP500, which Strohmaier thinks of as “500 times the average.”

TOP500 SC15 Performance Development

There has been impressive growth, and for many years the trend was very accurate for predicting future growth, but in the last few years inflection points have appeared, one in 2008 and another in 2014, where the trajectory flattens.

This raises two important questions, says Strohmaier: why is there an inflection point, and why does it appear at two different points in time?

“The nice thing is that the old growth rate before the inflection points were the same on both lines and the new growth rates are again the same on both lines. So the one effect is clearly technology, the other, in my opinion, is financial,” noted Strohmaier.

TOP500 SC15 Projected Performance Development

In the slide that shows the projections to the end of the decade before and after the inflection point, it can be seen that a seemingly small variance results in a significant 10X differential by the end of the decade.

“So instead of having an exascale computer by 2019, as we may have predicted ten years ago, we now think it’s going to be more like the middle of the next decade,” Strohmaier stated.
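The 10X end-of-decade differential follows directly from compounding a slightly lower annual growth rate over several years. A back-of-the-envelope check, using illustrative growth rates rather than values fitted to the actual list data:

```python
# Compound two annual growth rates over the same span to see how a
# modest difference in slope opens a roughly order-of-magnitude gap.
# The rates below are illustrative stand-ins, not fitted list values.
years = 7
old_rate = 1.9   # pre-inflection: performance nearly doubling each year
new_rate = 1.4   # post-inflection: a visibly reduced annual growth rate

old_projection = old_rate ** years   # growth factor on the old trend line
new_projection = new_rate ** years   # growth factor on the new trend line
gap = old_projection / new_projection

# A 0.5 difference in the annual multiplier compounds to a large gap.
print(f"old trend: {old_projection:.0f}x, new trend: {new_projection:.0f}x, "
      f"gap: {gap:.1f}x")
```

The exact magnitude depends on the fitted rates, but the mechanism is the point: small per-year deviations compound into the kind of 10X shortfall that pushes an exascale milestone from 2019 into the middle of the next decade.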

“Looking into what actually happened, we have to be more careful in how we construct the basis for our statistics,” he continued. “The TOP500 is an inventory-based list; new and old technology are all mixed up. If you really want to see the changes in technology on the list, you have to apply filters to pick out only the new systems with new technology coming onto the list and analyze that subset.”

Strohmaier filtered the list down to only the new systems, and further to systems that use only traditional superscalar processors — no Nvidia chips and no Intel Phis. The point of this exercise was to tease out the track of traditional processor technology.
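The two-stage filter he describes amounts to a simple predicate over list entries. A sketch over hypothetical records — the field names and values here are invented for illustration, not the list’s actual schema:

```python
# Hypothetical records mimicking TOP500 entries; field names and values
# are invented for illustration only.
entries = [
    {"name": "SysA", "first_list": "2015-11", "accelerator": None},
    {"name": "SysB", "first_list": "2015-11", "accelerator": "NVIDIA K80"},
    {"name": "SysC", "first_list": "2013-06", "accelerator": None},
    {"name": "SysD", "first_list": "2015-11", "accelerator": "Intel Xeon Phi"},
]

current_list = "2015-11"

# Keep only systems new to this list that run purely on conventional
# superscalar CPUs (no GPUs, no Xeon Phis) -- the subset used to track
# plain processor technology over time.
subset = [
    e for e in entries
    if e["first_list"] == current_list and e["accelerator"] is None
]
print([e["name"] for e in subset])
```

Only SysA survives both filters: SysB and SysD carry accelerators, and SysC is an older, inventory entry whose technology was already counted when it first appeared.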

This is the result:

TOP500 SC15 Tech Trends Scalar Processors - Moore's Law is fine

Strohmaier:

What you see is that the performance per core has taken a dramatic hit around 2005-2006, but it was compensated by our ability to put more and more cores on a single chip, which is the red curve, and if you multiply that out, as in performance per socket, per actual chip, you get the blue curve, which is actually pretty much Moore’s law. So what you see in this sample is that there is no clear indication that there is anything wrong with Moore’s law.

So what caused the slow-down in the performance curve?

The other thing is that over the decades we have put more and more components into our very large systems. I tried to approximate that by looking at the number of chip sockets for scalar processors we have on these large systems — that’s what you see on the red curve. While the average performance follows Moore’s law, the red line does not follow a clear exponential growth rate after about 2005-2006.

TOP500 SC15 Tech Trends Scalar Systems

At that point, we seem to have run out of steam, or out of money, in our ability to put more components into the very large systems, and the very large systems are not growing overall in size anymore as they did before. That is my interpretation of the data: that is why we have an inflection, and that is why the overall performance growth in the TOP500 has been reduced from its previous levels.

Right now supercomputing grows with Moore’s law, just as it did when supercomputing began, and it no longer outpaces Moore’s law as we had seen before.

So it’s clearly a technological reason, but it’s not a reason at the chip level; it’s actually a reason at the facility and system level that is most likely related to either power or money or both.

Accelerator stagnation

Strohmaier, who has been one of the more ardent defenders of the benefits of LINPACK as a unified benchmark, went on to explore accelerator trends, acknowledging that they are responsible for a considerable share of petaflops. “But if you look at what fraction of the overall list those accelerators contribute,” he went on, “and if you focus on the last two years, their share has actually stagnated if not fallen.”

“That means there is a hurdle linked to market penetration of those accelerators, which have not been able to penetrate markets beyond scientific computing. They have not gotten into the mainstream of HPC computing,” he added.

Power efficiency is another metric covered in the BoF. Looking at the top ten in terms of average power efficiency, the curve is uneven, but it’s growing. The trend for highest power efficiency is much stronger, however. These power winners tend to either use accelerators or be Blue Gene/Q systems, which are engineered for power efficiency.

TOP500 SC15 Most Power Efficient Architectures

The chart above shows the standouts for highest power efficiency, with new machines highlighted in yellow. TSUBAME KFC, installed at the Tokyo Institute of Technology and upgraded to NVIDIA K80s from K20xs for the latest benchmarking, came in first. What surprised Strohmaier were the number two and three machines in terms of power efficiency – Sugon and Inspur, respectively. And once again, every system on this list ranked for megaflops-per-watt has an accelerator on it. (More on the greenest systems will be forthcoming in a future piece.)
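The megaflops-per-watt metric behind this ranking is simply measured LINPACK performance divided by measured power draw. A quick sketch with invented figures (these are not the actual numbers for any ranked machine):

```python
# Megaflops-per-watt as used in Green500-style rankings: LINPACK Rmax
# divided by measured power. The example figures below are invented.
def mflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    """Convert Rmax in teraflops and power in kilowatts to MFLOPS/W."""
    return (rmax_tflops * 1e6) / (power_kw * 1e3)

# e.g. a hypothetical 150-teraflops machine drawing 35 kW:
print(f"{mflops_per_watt(150, 35):.0f} MFLOPS/W")
```

Because the numerator is dominated by accelerator flops on the winning systems, the metric naturally favors GPU- and Phi-equipped machines, consistent with Strohmaier's observation that every system in the efficiency ranking carries an accelerator.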

The final slide presented by Dr. Strohmaier plots the best application performance from the Gordon Bell Prize, awarded each year at SC, against TOP500 performance to show correlation. Since these are different applications running on potentially different systems, a close tracking between the two trends over time could be taken to suggest that LINPACK is still a useful reflection of real-world performance. This is something to dive deeper into another time, but for now, here is that slide:

TOP500 SC15: TOP500 vs Gordon Bell

 
