Japan Takes Top Three Spots on Green500 List

By Tiffany Trader

August 4, 2015

Japan, the island nation renowned for its energy- and space-saving design prowess, just nabbed the top three spots on the latest Green500 list and claimed eight of the top 20 spots. The Shoubu supercomputer from RIKEN took top honors on the 17th edition of the twice-yearly listing, becoming the first TOP500-level supercomputer to surpass the seven gigaflops-per-watt milestone.

Green500 list founder Wu-chun Feng, who goes by “Wu,” notes that as a geographically small island nation, Japan cannot afford to waste critical power resources, or to use them inefficiently. “It’s a little déjà vu,” he says, referring to the automobile landscape of the 1970s and 1980s. “It’s not that we as US people don’t care about power, but we have fewer constraints put on us when it comes to developing supercomputers. In Japan it’s all about efficiency, space efficiency and power efficiency.”

The same kind of innovation was seen in the automobile space where Japan led the world in designing and popularizing smaller and more fuel-efficient designs, leading up to the iconic green vehicle, the Toyota Prius.

Expectations were that a machine on this list would pass the six gigaflops-per-watt threshold. The previous green supercomputing champ, L-CSC from the GSI Helmholtz Center, was the first to overcome the five gigaflops-per-watt barrier, achieving 5.27 gigaflops-per-watt on the November 2014 list, but Shoubu went all the way to 7.03 gigaflops-per-watt.

[Image: Shoubu, Japan’s Green500 champ, June 2015]

Shoubu was followed closely by two machines from the High Energy Accelerator Research Organization (KEK): Suiren Blue, which took second place with 6.84 gigaflops-per-watt, and Suiren, which claimed third place with 6.22 gigaflops-per-watt. All three machines share the distinction of being the first 6+ gigaflops-per-watt systems on the TOP500/Green500 lists, and all three were the result of a collaborative effort between fabless Japanese startup PEZY and immersion cooling company ExaScaler.

All were built using PEZY’s second-generation 1,024-core custom MIMD processor and ExaScaler’s immersion liquid cooling technology. The lead machine, Shoubu, employed ExaScaler second-generation technology along with Intel’s Xeon E5-2618L v3 (8 cores / 16 threads, 2.3GHz ~ 3.4GHz) processor, equipped with 64GB memory and InfiniBand FDR. The “PEZY-SC” accelerator processor is said to offer 3 teraflops single-precision and 1.5 teraflops double-precision performance. Shoubu has a theoretical peak performance of 842.96 teraflops and a measured LINPACK of 412.67 teraflops, sufficient for the 160th spot on the latest TOP500.

Suiren Blue and Suiren are smaller, less-performant machines, with the former achieving 193.91 teraflops LINPACK for a number 392 TOP500 ranking and the latter delivering 206.57 teraflops LINPACK for a number 366 placement.

[Image: Green500 June 2015 top 10]

PEZY’s performance aspirations are no secret once you know what the company’s name stands for: PEZY = Peta, Exa, Zetta, Yotta. The company was established in 2010 but came to the attention of the supercomputing community last year with the debut of its first TOP500 machine, Suiren. The team tried hard for a Green500 victory then and fell just short, list founder Wu reports. Interestingly, this year’s increased performance and energy efficiency looks to have been achieved without any hardware changes: the TOP500 specs for each list (November 2014 and June 2015) show a machine with the same core count (262,784) and the same theoretical peak performance (373.02 teraflops).

But the LINPACK did change, from 178.1 to 206.6 teraflops, sufficient to boost Suiren’s TOP500 ranking from 369 to 366, and the machine’s total power use fell from 37.83 kW to 32.59 kW. These combined improvements brought the machine’s energy efficiency up from 4.95 gigaflops-per-watt to 6.22 gigaflops-per-watt. If the PEZY/ExaScaler partnership didn’t have two other systems in the running, the top spot would have been Suiren’s. A statement from ExaScaler confirms that this 25.6 percent power-performance boost was the result of carefully optimizing the software implementation.
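The arithmetic is easy to check: gigaflops-per-watt is simply measured LINPACK gigaflops divided by sustained watts, and teraflops per kilowatt cancels to the same unit. A minimal Python sketch using the figures quoted above (note that the Green500’s published efficiency comes from its own measurement run, so dividing the TOP500 Rmax by total power only approximates the listed value):

```python
def gflops_per_watt(linpack_tflops: float, power_kw: float) -> float:
    """Efficiency in GF/W; TF/kW cancels to GF/W directly."""
    return linpack_tflops / power_kw

# Suiren across the two lists (efficiency figures as published).
eff_nov2014 = 4.95   # GF/W, November 2014 Green500
eff_jun2015 = 6.22   # GF/W, June 2015 Green500
improvement = (eff_jun2015 / eff_nov2014 - 1) * 100
print(f"efficiency gain: {improvement:.1f}%")  # ~25.7%, in line with the quoted 25.6 percent

# Rough cross-check from the raw Rmax and power numbers (approximate, per the caveat above).
print(f"{gflops_per_watt(206.6, 32.59):.2f} GF/W")  # ~6.34
```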

Meeting Exascale Mandates

There are two ways to look at the current energy-efficiency trends, according to Green500 list custodian and Virginia Tech professor Wu Feng. In the positive sense, the trajectory set by the current list is on track to deliver a 20-40 MW exascale supercomputer in the 2022 timeframe. But if the community were still targeting an exascale machine for the original 2018 timeframe, it would be a supercomputer on the order of 150 megawatts, extrapolating from this list.
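Both extrapolations reduce to dividing an exaflop by an achievable efficiency. A quick sketch of the arithmetic, assuming Shoubu’s 7.03 gigaflops-per-watt as the current best and the DOE’s oft-cited 20 MW exascale goal:

```python
EXAFLOP = 1e18  # floating-point operations per second

# Power for an exaflop machine at today's best measured efficiency (Shoubu, 7.03 GF/W).
power_watts = EXAFLOP / (7.03 * 1e9)
print(f"{power_watts / 1e6:.0f} MW")  # ~142 MW, i.e. "on the order of 150 megawatts"

# Efficiency required to land an exaflop inside a 20 MW envelope.
needed_gfw = EXAFLOP / (20e6 * 1e9)
print(f"{needed_gfw:.0f} GF/W")  # 50 GF/W, about seven times Shoubu's figure
```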

Wu gets the sense that an increasing number of people are not worried about power because vendors are saying they will hit this goal. He warns that this could be a false sense of security. “Even if we do make the 20 MW target, we’ve bought ourselves four to six years by slipping the exascale target from 2018 to 2022-2024,” he says. “So that false sense of security that ‘power is not going to be a problem’ — well it is a problem if we were still looking at 2018.”

At the same time, Wu emphasizes that the DOE is taking exascale innovation very seriously and is investing research dollars into FastForward and related projects where vendors, such as AMD, Intel and NVIDIA, are focused on maximizing performance per watt.

“So these two aspects – the lengthened runway and the investment in energy-efficient technologies – have gone a long way toward addressing the thermal power envelope of these extreme-scale supercomputers,” he says.

“Even a 150 MW system, extrapolated out from today’s list, is historically quite good given that six years ago when you extrapolated out the power envelope linearly, we were well over a gigawatt for an exascale machine. So there has been an order of magnitude improvement. Combined with this additional runway, the expectation is that we’ll cover that last order of magnitude to get to the 20 MW target,” he says.

The HPC community’s enhanced focus on power and energy as first-order design constraints is also reflected in the relative diversity of the Green500 list compared with the TOP500 list. While the last several TOP500 lists have seen very little turnover in the top ten spots, nearly every Green500 list experiences some churn at the top. The latest TOP500 saw only one new entrant in its top ten, whereas the Green500 welcomed four machines into the top tier. To be fair, of course, the barrier to entry is less prohibitive, since Green500 champions can be (and tend to be) smaller TOP500 systems that cost orders of magnitude less to build than their more FLOPS-dominant list-mates.

The fall of the thermal power envelope has mainly been driven by heterogeneous supercomputers, powered by manycore chips from vendors like AMD, NVIDIA, Intel and now PEZY, says Wu. The top 32 supercomputers on the current list made use of accelerators, compared with the top 23 on the November 2014 edition of the list, roughly a 40 percent increase. Wu says heterogeneous designs give the user or developer different kinds of silicon brains that can be matched to the task at hand, extracting performance and energy efficiencies that weren’t available before.

One of those silicon “brains” conspicuously absent from the upper rankings of the Green500 was ARM. Wu notes that it is a challenge for ARM licensees to make the jump from embedded mobile to the HPC server space, but he thinks ARM’s day is coming.

Wu is a champion of low-power computing, going back to his creation of the Green Destiny supercomputer in 2002. That machine had a 3.2-kilowatt power budget (the equivalent of two hairdryers) and a 101-gigaflop LINPACK rating that would have placed it at number 393 on the 2002 TOP500 list. Wu says he understands the skepticism around ARM, but he expects that 64-bit ARM will begin to populate the Green500 within the next year or two.

“It could become the Toyota Prius of supercomputing,” he shares, noting that “it will have its place even if it isn’t at the upper echelons of the next exascale supercomputer.”
