ORNL Summit Supercomputer Is Officially Here

By Tiffany Trader

June 8, 2018

Oak Ridge National Laboratory (ORNL) together with IBM and Nvidia celebrated the official unveiling of the Department of Energy (DOE) Summit supercomputer today at an event presided over by DOE Secretary Rick Perry. The partners, who collaborated to design and build the estimated $200-million dollar machine under the CORAL procurement program, heralded it as the world’s most powerful supercomputer with 200 peak petaflops for high-performance computing workloads and 3.3 peak exaops for emerging AI workloads.

The deployment encompasses 4,608 compute nodes, each containing two 22-core IBM Power9 processors and six Nvidia Tesla V100 GPUs, interconnected with dual-rail Mellanox EDR 100Gb/s InfiniBand. Summit is said to offer 8X more performance than its predecessor, Titan, which spans 18,688 AMD-Nvidia nodes. The new supercomputer has a power footprint of 13MW, not a significant increase over Titan’s 9MW considering the massive performance leap. Summit also includes Alpine, a 250PB IBM Spectrum Scale parallel file system.
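The headline figures can be sanity-checked from the node counts. The per-GPU peaks below are Nvidia's published V100 numbers, an assumption on our part rather than figures stated by the partners, so the totals land near, not exactly on, the quoted peaks:

```python
# Rough check of Summit's headline numbers from its node configuration.
# Per-GPU peaks are Nvidia's published V100 (SXM2) figures and are
# assumptions here, not values stated by ORNL, IBM or Nvidia above.
NODES = 4608
GPUS_PER_NODE = 6
FP64_TFLOPS_PER_GPU = 7.8      # double-precision peak per V100
TENSOR_TFLOPS_PER_GPU = 125.0  # mixed-precision Tensor Core peak per V100

hpc_peak_pf = NODES * GPUS_PER_NODE * FP64_TFLOPS_PER_GPU / 1000   # petaflops
ai_peak_eops = NODES * GPUS_PER_NODE * TENSOR_TFLOPS_PER_GPU / 1e6  # exaops

print(f"GPU FP64 peak:   ~{hpc_peak_pf:.0f} petaflops")  # ~216, near the 200 PF headline
print(f"Tensor Core peak: ~{ai_peak_eops:.2f} exaops")   # ~3.46, near the 3.3 exaops headline
```

The small gaps between these back-of-the-envelope totals and the official 200-petaflops/3.3-exaops figures come down to clock and accounting assumptions in the marketing numbers.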

DOE Secretary Rick Perry at Summit unveiling

Perry upheld Summit’s installation as a sign of the United States’ global competitiveness and technological leadership:

“We know we’re in a competition and we know that this competition is real and it matters who gets there first,” said Perry. “Today [we] show the rest of the world that America is back in the game and we’re back in the game in a big way. Our national security, our economics, our scientific discovery, our energy research will be affected in a powerful way.”

Perry warned however that the U.S. also faces a challenge. “There are other nations that are racing to develop their technology; if we’re not dedicated and determined, the leadership we enjoy today could be the leadership of tomorrow and we don’t want that,” he said.

While this soft launch (formal acceptance is scheduled for later this year) is an important milestone that is generating wide media attention, the HPC community proper is still awaiting hard benchmarks; it won’t have to wait much longer, with the next Top500 list due out in two weeks. If Summit achieves the Linpack score we’ve heard projected, roughly 120 petaflops, the United States could retake the Top500 crown from China, barring surprises. China has held the top of the list since 2013, when the 33.9-petaflops (Linpack) Tianhe-2 debuted. That machine fell to number two in 2016, when China stood up the 93-petaflops (Linpack) Sunway TaihuLight, which still holds the number one spot. The fastest U.S. machine is still the Oak Ridge Titan supercomputer, which entered the list at the number one position in November 2012 (with 17.6 Linpack petaflops) and now ranks fifth.

Perry emphasized the importance of supercomputing leadership to the United States’ administration, stating, “President Trump is determined to make America first in supercomputing.” He referenced the President’s March budget, noting it includes $677 million in funding for exascale activities, and indicated further funding increases are likely. (See our latest exascale budget coverage here.) The procurement process for Summit’s successor, named Frontier, is already underway. The plan is for the CORAL-2 machine to be the nation’s first capable exascale supercomputer with delivery timed for the second half of 2021.

The Linpack benchmark on which the Top500 list is based, though imperfect, is a more meaningful way to rank machines than peak capability. Of course, the only benchmark that really matters is how a supercomputer performs on real applications. At the unveiling today, ORNL Director Thomas Zacharia noted that one of the earliest science applications carried out on Summit broke the mixed-precision exascale barrier.

Each Summit node uses six Nvidia Volta GPUs per two Power9 CPUs, tied together with Nvidia’s NVLink 2.0 technology (Image credit: Jason Richards/ORNL)

During early testing, researchers at Oak Ridge achieved 1.88 exaops using Summit’s V100 GPU Tensor cores to run a comparative genomics code that analyzes variation between human genome sequences. The run was carried out using a representative dataset on 4,000 nodes, achieving a computational efficiency of greater than 50 percent. Summit enabled a 25-fold speedup for the code compared to the lab’s previous leadership-class supercomputer Titan with the Tensor cores alone providing a 4.5-fold application speedup. (See ORNL’s writeup for more details.)
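The efficiency figure follows directly from the run's scale. A quick check, assuming Nvidia's published 125-teraflops Tensor Core peak per V100 (a figure not stated in ORNL's announcement):

```python
# Checking the ">50 percent computational efficiency" claim for the
# 1.88-exaops comparative genomics run on 4,000 Summit nodes.
# The 125 TF/GPU Tensor Core peak is an assumed Nvidia spec figure.
nodes_used = 4000
gpus_per_node = 6
tensor_peak_tf = 125.0

peak_eops = nodes_used * gpus_per_node * tensor_peak_tf / 1e6  # 3.0 exaops theoretical peak
achieved_eops = 1.88
efficiency = achieved_eops / peak_eops

print(f"efficiency ≈ {efficiency:.0%}")  # ≈ 63%, consistent with "greater than 50 percent"
```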

Summit, according to Oak Ridge and its partners, is poised to provide unprecedented computing power and deep learning capability, enabling scientific discoveries that were previously impractical or impossible and advancing research in energy, advanced materials, and artificial intelligence (AI), among other domains. Its power will also be lent to improving the care of military veterans through a partnership with the US Department of Veterans Affairs that began in 2016.

Some of the science projects slated to run on Summit (as described by Oak Ridge):

Astrophysics

Exploding stars, known as supernovas, supply researchers with clues related to how heavy elements—including the gold in jewelry and iron in blood—seeded the universe.

The highly scalable FLASH code models this process at multiple scales—from the nuclear level to the large-scale hydrodynamics of a star’s final moments. On Summit, FLASH will go much further than previously possible, simulating supernova scenarios several thousand times longer and tracking about 12 times more elements than past projects.

“It’s at least a hundred times more computation than we’ve been able to do on earlier machines,” said ORNL computational astrophysicist Bronson Messer. “The sheer size of Summit will allow us to make very high-resolution models.”

Materials

Developing the next generation of materials, including compounds for energy storage, conversion and production, depends on subatomic understanding of material behavior. QMCPACK, a quantum Monte Carlo application, simulates these interactions using first-principles calculations.

Up to now, researchers have only been able to simulate tens of atoms because of QMCPACK’s high computational cost. Summit, however, can support materials composed of hundreds of atoms, a jump that aids the search for a more practical superconductor—a material that can transmit electricity with no energy loss.
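QMCPACK itself is far beyond a snippet, but the core idea of quantum Monte Carlo, stochastically sampling a trial wavefunction to estimate energies from first principles, can be sketched in miniature. The following toy variational Monte Carlo for a single hydrogen atom (atomic units throughout; purely illustrative and in no way QMCPACK's actual algorithm) shows the sampling machinery whose cost climbs steeply as atoms are added:

```python
import math
import random

# Toy variational Monte Carlo for a hydrogen atom (atomic units).
# Trial wavefunction psi(r) = exp(-alpha * r); its local energy is
# E_L(r) = -alpha**2 / 2 + (alpha - 1) / r, which for alpha = 1 is
# exactly -0.5 hartree (the true ground-state energy).

def local_energy(r, alpha):
    return -0.5 * alpha**2 + (alpha - 1.0) / r

def vmc_energy(alpha, steps=200_000, seed=1):
    rng = random.Random(seed)
    pos = [0.5, 0.5, 0.5]
    r = math.sqrt(sum(x * x for x in pos))
    total, samples = 0.0, 0
    for step in range(steps):
        # Metropolis move: sample the density |psi|^2 = exp(-2*alpha*r)
        trial = [x + rng.uniform(-0.4, 0.4) for x in pos]
        r_new = math.sqrt(sum(x * x for x in trial))
        if rng.random() < math.exp(-2.0 * alpha * (r_new - r)):
            pos, r = trial, r_new
        if step > 1000:  # discard equilibration steps
            total += local_energy(r, alpha)
            samples += 1
    return total / samples

print(round(vmc_energy(1.0), 3))  # -0.5: exact for alpha = 1 (zero-variance case)
```

Real materials calculations replace this one-electron toy with many interacting electrons, which is why the method's cost grows so quickly and why Summit's scale moves the frontier from tens of atoms to hundreds.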

“Summit’s large, on-node memory is very important for increasing the range of complexity in materials and physical phenomena,” said ORNL staff scientist Paul Kent. “Additionally, the much more powerful nodes are really going to help us extend the range of our simulations.”

Cancer Surveillance

One of the keys to combating cancer is developing tools that can automatically extract, analyze and sort existing health data to reveal previously hidden relationships between disease factors such as genes, biological markers and environment. Paired with unstructured data such as text-based reports and medical images, machine learning algorithms scaled on Summit will help supply medical researchers with a comprehensive view of the U.S. cancer population at a level of detail typically obtained only for clinical trial patients.

This cancer surveillance project is part of the CANcer Distributed Learning Environment, or CANDLE, a joint initiative between DOE and the National Cancer Institute.

“Essentially, we are training computers to read documents and abstract information using large volumes of data,” ORNL researcher Gina Tourassi said. “Summit enables us to explore much more complex models in a time efficient way so we can identify the ones that are most effective.”

Systems Biology

Applying machine learning and AI to genetic and biomedical datasets offers the potential to accelerate understanding of human health and disease outcomes.

Using a mix of AI techniques on Summit, researchers will be able to identify patterns in the function, cooperation and evolution of human proteins and cellular systems. These patterns can collectively give rise to clinical phenotypes, observable traits of diseases such as Alzheimer’s, heart disease or addiction, and inform the drug discovery process.

Through a strategic partnership project between ORNL and the U.S. Department of Veterans Affairs, researchers are combining clinical and genomic data with machine learning and Summit’s advanced architecture to understand the genetic factors that contribute to conditions such as opioid addiction.

“The complexity of humans as a biological system is incredible,” said ORNL computational biologist Dan Jacobson. “Summit is enabling a whole new range of science that was simply not possible before it arrived.”
