The Countdown to the Next Gordon Bell Prize

By Tiffany Trader

November 17, 2014

With so much on the menu at SC, from its exceptional program of technical papers and tutorials to research posters and Birds-of-a-Feather (BOF) sessions, it’s difficult to choose the best part, but it’s safe to say that the Gordon Bell Prize is not just a highlight of SC; it’s one of the highest honors in HPC. Every year since 1987, an uber-talented group of finalists has raised the bar on parallel computing by applying HPC to a range of important science, engineering, and large-scale data analytics problems. Winners must demonstrate an outstanding achievement in one of three areas: peak performance, scalability and time-to-solution, or a special achievement. They are also asked to justify their entries with regard to their real-world benefit as well as their contribution to the broader HPC community.

The competition is funded by its namesake, Gordon Bell, a pioneer in computer architecture, parallel processing and high performance computing, and this year five teams are contending for the coveted prize. In addition to the first-place $10,000 cash award, one runner-up will be selected for Honorable Mention. The Association for Computing Machinery (ACM) awards committee will announce the results at the awards ceremony of the 26th annual Supercomputing Conference (SC) in New Orleans, now less than a week away.

As a prelude to this well-attended session, here is an overview of the five accomplished teams, who are doing their part to advance parallel computing through new or specialized architectures, advances in algorithms and applications, and other optimizations that exploit the potential of large-scale systems.

The five papers/teams are:

  • “Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers,” an international research project co-led by Michael Bader (Technische Universität München, Germany), Christian Pelties (Ludwig-Maximilians-Universität, Germany) and Alexander Heinecke (Intel, United States).
  • “Physics-based urban earthquake simulation enhanced by 10.7 BlnDOF × 30 K time-step unstructured FE non-linear seismic wave simulation,” from a Japanese research team led by the University of Tokyo’s Tsuyoshi Ichimura.
  • “Real-time Scalable Cortical Computing at 46 Giga-Synaptic OPS/Watt with ~100× Speedup in Time-to-Solution and ~100,000× Reduction in Energy-to-Solution,” with research led by Dharmendra S. Modha, IBM Fellow and IBM Chief Scientist, Brain-inspired Computing, and additional team members from IBM and Cornell University.
  • “Anton 2: Raising the Bar for Performance and Programmability in a Special-Purpose Molecular Dynamics Supercomputer,” with lead researcher David E. Shaw, of DE Shaw Research, and team.
  • “24.77 Pflops on a Gravitational Tree-Code to Simulate the Milky Way Galaxy with 18600 GPUs,” with research led by Simon Portegies Zwart and Jeroen Bédorf of the Netherlands’ Leiden Observatory, and team members from SURFsara Amsterdam, the National Astronomical Observatory of Japan, RIKEN AICS, and the University of Tsukuba (Japan).

Each of these teams will present their papers next week on Tuesday and Wednesday, in advance of the award announcement on Thursday.

The authors of “Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers” report achieving unprecedented earth model complexity on an Intel Xeon Phi platform (China’s Tianhe-2 supercomputer). They carried out architecture-aware optimizations to the SeisSol code that deliver up to 50 percent of peak performance. SeisSol delivers near-optimal weak scaling, reaching 8.6 DP-PFLOPS on 8,192 nodes of Tianhe-2, and the team’s performance model projects 18-20 DP-PFLOPS on the full machine. They anticipate the work having real-world benefits for modern civil engineering.
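
For readers who want to sanity-check that projection, a naive weak-scaling extrapolation is simply linear in node count. The Python sketch below uses the article’s figures plus the commonly cited 16,000-node size of the full Tianhe-2, which is an outside assumption; the team’s actual performance model is more detailed.

    # Naive weak-scaling projection: at fixed work per node, performance
    # grows roughly linearly with node count. Measured figures are from
    # the article; the 16,000-node total for Tianhe-2 is an assumption.
    measured_pflops = 8.6        # DP-PFLOPS observed on 8,192 nodes
    measured_nodes = 8192
    full_nodes = 16000           # assumed full Tianhe-2 node count

    projected = measured_pflops * full_nodes / measured_nodes
    print(f"linear projection: {projected:.1f} DP-PFLOPS")
    # -> ~16.8 DP-PFLOPS, a rough floor; the team's detailed model
    #    projects 18-20 on the full machine.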

The next entry is notable for its humanitarian bent. “Physics-based urban earthquake simulation enhanced by 10.7 BlnDOF × 30 K time-step unstructured FE non-linear seismic wave simulation” is aimed at supporting earthquake response efforts. To boost the reliability of urban earthquake response analyses, the team developed a hybrid seismic wave amplification simulation code, GAMERA. This unstructured 3-D finite-element-based MPI-OpenMP code was deployed on Japan’s K computer, where it achieved a size-up efficiency of 87.1 percent using the entire machine. They also applied GAMERA to a physics-based urban earthquake response analysis for Tokyo. The team acknowledges this is still a very compute-intensive problem, but says such analyses can improve the quality of disaster estimations.
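
The “size-up” figure is weak-scaling efficiency: with work per process held fixed, ideal scaling keeps the runtime constant as the machine grows. A minimal sketch of the metric, with hypothetical timings chosen only to reproduce the reported 87.1 percent:

    # "Size-up" (weak-scaling) efficiency: with per-process work fixed,
    # ideal scaling keeps runtime constant, so efficiency is the ratio
    # of the small-run time to the full-machine time.
    def sizeup_efficiency(t_base: float, t_scaled: float) -> float:
        """1.0 means perfect weak scaling."""
        return t_base / t_scaled

    # Hypothetical timings, chosen only to match the reported figure:
    print(f"{sizeup_efficiency(100.0, 114.8):.1%}")   # -> 87.1%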

For “Real-time Scalable Cortical Computing at 46 Giga-Synaptic OPS/Watt with ~100× Speedup in Time-to-Solution and ~100,000× Reduction in Energy-to-Solution,” IBM and Cornell University researchers united to develop a parallel, event-driven kernel for neurosynaptic computation, called TrueNorth. The brain-inspired neurosynaptic processor emphasizes efficiency of computation, memory, and communication. Its backers are targeting TrueNorth for a wide range of cognitive applications. They’ve already used a co-designed silicon expression of the kernel to run computer vision applications and complex recurrent neural network simulations.
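
To make “event-driven kernel” concrete, here is a toy spiking update in Python in which work happens only when a spike event arrives, rather than every neuron being polled every tick. This is purely illustrative; TrueNorth’s actual kernel, neuron model, and parameters are those described in the team’s paper, not these.

    # Toy event-driven spiking update: only neurons receiving events do
    # any work, which is the efficiency argument behind such kernels.
    from collections import defaultdict, deque

    weights = defaultdict(dict)      # weights[src][dst] = synaptic weight
    potential = defaultdict(float)   # membrane potential per neuron
    THRESHOLD = 1.0                  # illustrative firing threshold

    def deliver(events: deque):
        """Process spike events; neurons that fire enqueue new events."""
        while events:
            src = events.popleft()
            for dst, w in weights[src].items():
                potential[dst] += w
                if potential[dst] >= THRESHOLD:
                    potential[dst] = 0.0   # reset after firing
                    events.append(dst)     # propagate the spike

    weights[0][1] = 0.6
    weights[1][2] = 1.2
    deliver(deque([0, 0]))   # two spikes from neuron 0 cascade to 1, then 2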

The large D.E. Shaw Research team behind “Anton 2: Raising the Bar for Performance and Programmability in a Special-Purpose Molecular Dynamics Supercomputer” reports that the second-generation Anton 2 surpasses its predecessor, Anton 1, in performance, programmability, and capacity. Anton 2 is up to ten times faster than Anton 1 with the same number of nodes and, according to the developers, operates 180 times faster than any general-purpose hardware platform. The focus of the upgrade was enabling fine-grained event-driven operation, said to improve performance by increasing the overlap of computation with communication.
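
Overlapping computation with communication is a standard HPC idiom that Anton 2 pushes to a much finer grain in hardware. A minimal software sketch of the general technique, assuming mpi4py and exactly two ranks (run with mpiexec -n 2):

    # Post a non-blocking halo exchange, compute on interior data while
    # the messages are in flight, and wait only when the halo is needed.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    peer = 1 - comm.Get_rank()               # two-rank example

    halo_out = np.full(1024, comm.Get_rank(), dtype='d')
    halo_in = np.empty(1024, dtype='d')
    interior = np.random.rand(100_000)

    reqs = [comm.Isend(halo_out, dest=peer),
            comm.Irecv(halo_in, source=peer)]
    interior_sum = interior.sum()             # overlapped computation
    MPI.Request.Waitall(reqs)                 # halo data now usable
    boundary_sum = halo_in.sum()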

Last, but not least, the final paper, “24.77 Pflops on a Gravitational Tree-Code to Simulate the Milky Way Galaxy with 18600 GPUs,” simulates the long-term evolution of the Milky Way Galaxy using 1,000 times more particles than prior such simulations. Simulations were performed on two leadership-class machines, the Swiss supercomputer Piz Daint and ORNL’s Titan in the US, using the N-body gravitational tree-code Bonsai. On Piz Daint, a 51-billion-particle simulation sustained parallel efficiency above 95 percent, while the highest performance was achieved on Titan with a 242-billion-particle Milky Way model. The Titan run, which harnessed 18,600 GPUs, reached a sustained GPU performance of 33.49 petaflops and application performance of 24.77 petaflops.
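
The reason tree-codes reach such particle counts at all is algorithmic: distant groups of bodies are replaced by their center of mass, cutting the O(N²) direct sum to roughly O(N log N). Below is a textbook Barnes-Hut acceptance test as a sketch of the idea, not Bonsai’s actual GPU implementation:

    # Barnes-Hut opening test: a tree cell that subtends a small enough
    # angle is summarized by its monopole (center of mass) instead of
    # being expanded into per-particle interactions.
    def use_approximation(cell_size: float, distance: float,
                          theta: float = 0.5) -> bool:
        """True if the cell's monopole approximation is acceptable."""
        return cell_size / distance < theta

    # A 10-unit-wide cell seen from 100 units away is safely approximated:
    print(use_approximation(10.0, 100.0))   # True: 0.1 < 0.5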

Given the breadth and depth of these projects, it is clear that the next winner of the Gordon Bell Prize will join an elite list of past prize winners. Last year’s award went to the team responsible for “11 PFLOP/s Simulations of Cloud Cavitation Collapse,” by Diego Rossinelli, Babak Hejazialhosseini, Panagiotis Hadjidoukas and Petros Koumoutsakos, all of ETH Zurich; Costas Bekas and Alessandro Curioni of IBM Zurich Research Laboratory; Adam Bertsch and Scott Futral of Lawrence Livermore National Laboratory; and Steffen Schmidt and Nikolaus Adams of Technical University Munich.

In what IBM termed the “largest simulation ever in fluid dynamics,” the high-throughput simulations of cloud cavitation collapse on 1.6 million cores of Sequoia reached 55 percent of the machine’s peak performance, corresponding to 11 petaflops. (This later rose to 14.4 petaflops sustained.) According to the authors, “the software successfully addresses the challenges that hinder the effective solution of complex flows on contemporary supercomputers, such as limited memory bandwidth, I/O bandwidth and storage capacity.” By improving the quantitative prediction of cavitation, these fluid dynamics simulations can help improve the design of high-pressure fuel injectors and propellers and boost the performance of water purification systems and kidney lithotripsy. Cavitation is also being explored as an emerging therapeutic modality for cancer treatment. The paper is published in the Proceedings of SC’13.
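
As a quick arithmetic check, Sequoia’s theoretical peak of roughly 20.1 DP-petaflops (an outside figure, not stated in the article) squares with the numbers reported:

    # 55 percent of Sequoia's ~20.1 DP-petaflops theoretical peak (an
    # outside figure) matches the reported 11 petaflops sustained.
    print(f"{0.55 * 20.1:.1f} petaflops")   # -> 11.1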
