China Scores Fifth TOP500 Win with Tianhe-2

By Tiffany Trader

July 14, 2015

When China grabbed the TOP500 crown for its Tianhe-2 supercomputer in June 2013 with double the peak FLOPS of the next fastest machine (Titan at Oak Ridge National Laboratory), could anyone have foreseen that the machine would still hold on to the top spot a full two years later? As we can see from the 45th edition of the twice-yearly TOP500 list, which was published Monday in tandem with the 2015 International Supercomputing Conference, not only did Tianhe-2, or “Milky Way-2,” retain its position, but the top five systems remained unchanged:

[Image: TOP500 top systems, June 2013 vs. June 2015]

And in these last two years, spanning five iterations of the list, only two new machines have entered the top 10: Piz Daint, the Cray XC30 at the Swiss National Supercomputing Centre (CSCS) in Switzerland, with 6.27 LINPACK petaflops, and Shaheen II, the Cray XC40 installed at King Abdullah University of Science and Technology in Saudi Arabia, with 5.53 LINPACK petaflops. They come in at the sixth and seventh spots respectively, pushing down SuperMUC (IBM/Lenovo, Leibniz Rechenzentrum, Germany, 2.89 petaflops) and Tianhe-1A (National Supercomputing Center in Tianjin, China, 2.56 petaflops).

That was the two-year view. Comparing the June 2015 list with the previous one from November, there is only one new addition to the rarefied top 10 zone: Saudi Arabia’s Shaheen II, which pushed out the 3.57-petaflop Cray CS-Storm system installed at an undisclosed US government site.

The degree of stagnation at the top of supercomputing is unprecedented and speaks to the challenges facing the upper echelon of high-performance computing, challenges that luminaries such as Berkeley Lab Deputy Director and TOP500 author Horst Simon, among many others, attribute most notably to the limitations of Moore’s law.

Illustrating the severity of this slowdown, the TOP500 organizers confirm that the last two years have seen historically low year-over-year performance increases across the list as a whole, and that is even with the bolstering effect of the very large systems that sit at the top. The current list reflects a combined performance of 363 petaflops for all 500 systems, compared with 309 petaflops six months ago and 274 petaflops one year ago.
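As a quick sanity check on those totals, the three aggregate figures quoted above imply a year-over-year growth rate far below the list's historical norm. A minimal sketch, using only the numbers in this article:

```python
# Aggregate TOP500 LINPACK totals quoted in the article, in petaflops.
totals_pflops = {"2014-06": 274, "2014-11": 309, "2015-06": 363}

# Year-over-year growth of the combined list performance.
yoy = totals_pflops["2015-06"] / totals_pflops["2014-06"] - 1

# Growth over the most recent six-month iteration.
half_year = totals_pflops["2015-06"] / totals_pflops["2014-11"] - 1

print(f"Year-over-year growth: {yoy:.1%}")    # roughly 32 percent
print(f"Six-month growth: {half_year:.1%}")   # roughly 17 percent
```

Roughly 32 percent annual growth is indeed well short of the 80-to-90-percent pace the list sustained historically, consistent with the organizers' characterization.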

[Chart: TOP500 performance development, June 2015]

Further, the performance of the last system on the list (#500) has lagged behind historical trends for the last six years, with a marked shift in its performance trajectory. Since 2008, the performance of that #500 system has risen about 55 percent per year. Contrast that with the annual growth rate of 90 percent seen from 1994 to 2008, a period of performance scaling that is coming to be recognized as the heyday of Moore’s law.
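The gap between those two growth rates compounds dramatically. A small illustrative calculation (the rates are the article's; the ten-year horizon is an arbitrary choice for comparison):

```python
# Compound the two annual growth rates for the #500 system over a decade
# to show how far apart the two trajectories drift.
decade_at_55 = 1.55 ** 10   # post-2008 trend: 55% per year
decade_at_90 = 1.90 ** 10   # 1994-2008 trend: 90% per year

print(f"10-year gain at 55%/yr: about {decade_at_55:.0f}x")   # ~80x
print(f"10-year gain at 90%/yr: about {decade_at_90:.0f}x")   # ~613x
```

In other words, over ten years the older trend line delivers roughly 600-fold growth while the current one delivers roughly 80-fold, nearly an order of magnitude less.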

While the continued standstill is fairly dramatic, it didn’t come as a surprise. Most obviously, given multi-year procurement cycles, the slowdown we are seeing today is a holdover from recession-era investment levels. Perhaps just as significant, however, the major sites that buy these leadership-class systems have been in a holding pattern as they waited to see what technologies and architectures would provide the biggest value for their users and workloads.

In the past, with a strong Moore’s law driving faster, cheaper, more energy-efficient sequential processing advances, typical refresh cycles would center on next-generation CPUs. But with the move toward greater and greater heterogeneity, there are multiple technology cycles to watch, including Tesla GPU SKUs from NVIDIA, the Intel MIC line, and the continued evolution of 64-bit ARM. There are also new memory and storage technologies (NVRAM, burst buffers, SSDs, memristors) and next-generation interconnects (NVIDIA’s NVLink, Intel Omni-Path, etc.).

Many of these promised advances are about to come to fruition, and the pipelined systems that hinge on them should restore movement on the list; this should start to happen within the next six months to a year. In the US, the ACES and CORAL collaboration efforts are on track to produce five major systems ranging from 30 to 180 petaflops. The Trinity supercomputer is contracted to provide the National Nuclear Security Administration (NNSA) with 40 petaflops of compute power. Installation of the $174 million Cray XC40 machine is scheduled for this summer, but it is unknown whether there will be enough lead time to have it up and benchmarked by SC15 and the next iteration of the TOP500 list.

The system will be physically located in Los Alamos at the Nicholas Metropolis Center for Modeling and Simulation and will be managed and operated by Los Alamos National Laboratory and Sandia National Laboratories under the Alliance for Computing at Extreme Scale (ACES) partnership.

Other US systems are expected along the following timeline:

2016: Cori, NERSC (> 30 petaflops)
2017: Summit, ORNL, OLCF (150 petaflops)
2018: Sierra, LLNL, NNSA (150 petaflops)
2018: Aurora, ANL, ALCF (180 petaflops)

Japan and Europe are also ramping up their “exascale-focused” agendas, although most nations have given up the aim of hitting the 2020 timeframe. As Horst Simon has observed, if exascale were going to make the 2020 deadline in the US, the CORAL systems (Summit, Sierra and Aurora) would have had to be installed already, yet they are still two to three years off.

Given China’s long-running FLOPS lead, it was a top contender in the race to break the next 1,000X performance barrier. Tianhe-2 was due to receive an infusion of tens of thousands of Intel Xeon chips that would have expanded it past the 110 petaflops mark until bans put in place by the US government derailed those upgrade plans.

Last August, Intel was asked by the US government to apply for an export license authorizing the shipment to Chinese system maker Inspur, but the application was denied, blocking the chipmaker from assisting with Tianhe-2’s upgrade path. Shortly thereafter, four Chinese supercomputer centers were blacklisted by the US government on the grounds that they were “acting contrary to the national security or foreign policy interests of the United States.”

Many in the industry see this as political posturing and fear the move may backfire, spurring China to accelerate its homegrown chipmaking program. While a completely indigenous machine has long been a goal for the protectionist nation, ready access to US microprocessor technology, combined with the supreme difficulty of chip innovation, had a dampening effect. With access to this critical componentry effectively cut off, China will be forced to redouble its efforts to engineer a completely indigenous supercomputing stack.

In a sign of just how serious China is about building its domestic semiconductor business, albeit on the memory side, the Chinese state-owned chip designer Tsinghua Unigroup Ltd. just made a $23 billion bid for US memory manufacturer Micron Technology. Analysts say the deal, which would be the largest transfer of its kind, would face intense scrutiny from US officials concerned with the security and anti-trust implications of allowing the last memory chipmaker in the US to be transferred to a state-controlled entity.
