Four Teams Using ORNL’s Summit Supercomputer Named Finalists in 2020 Gordon Bell Prize

November 11, 2020

Nov. 11, 2020 — Since 1987, the Association for Computing Machinery has awarded the annual Gordon Bell Prize to recognize outstanding achievements in high-performance computing (HPC). Presented each year at the International Conference for High-Performance Computing, Networking, Storage and Analysis (SC), the prizes not only reward innovative projects that employ HPC for applications in science, engineering, and large-scale data analytics but also provide a timeline of milestones in parallel computing.

As a frequent home to the world’s most powerful and smartest scientific supercomputers, the US Department of Energy’s (DOE’s) Oak Ridge National Laboratory (ORNL) has hosted many previous Gordon Bell honorees on its HPC systems. The Oak Ridge Leadership Computing Facility (OLCF), a DOE Office of Science User Facility located at ORNL, manages these systems and makes them available to scientists around the world to accelerate scientific discovery and engineering progress. Consequently, the OLCF has provided the HPC systems for 25 previous Gordon Bell Prize finalists and eight winners, including last year’s team from ETH Zürich.

This year, four projects that used ORNL’s IBM AC922 Summit supercomputer are finalists. The 2020 Gordon Bell Prize will be awarded on November 19 at SC20. Here are the finalists that used Summit.

DeePMD-kit: A New Paradigm for Molecular Dynamics Modeling

“The code produced by Team DeePMD, with its ability to scale to huge numbers of atoms, while retaining chemical accuracy, is poised to transform the field of materials research. Applications to other fields will surely follow.” —Michael Klein, Laura H. Carnell Professor of Science, Temple University

Molecular dynamics modeling has become a primary tool in scientific inquiry, allowing scientists to analyze the movements of interacting atoms over a set period of time, which helps them determine the properties of different materials or organisms. These computer simulations often lead the way in designing everything from new drugs to improved alloys. However, the two most popular methodologies come with caveats.

Classical molecular dynamics (MD), using Newtonian physics, can simulate trillions of particles on a modern supercomputer—however, its accuracy for more intricate simulations has limitations. Ab initio (“from the beginning”) molecular dynamics (AIMD), using quantum physics at each time step, can produce much more accurate results—but its inherent computational complexity limits the size and time span of its simulations. But what if there was a way to bridge the gap between MD and AIMD, to produce complex simulations that are both large and accurate?

With the power of ORNL’s Summit supercomputer, researchers from Lawrence Berkeley National Laboratory’s Computational Research Division; the University of California, Berkeley; the Institute of Applied Physics and Computational Mathematics in Beijing; Peking University; and Princeton University successfully tested a software package that offers a potential solution: DeePMD-kit, named for “deep potential molecular dynamics.”

The team refers to DeePMD-kit as an “HPC+AI+Physical model” in that it combines high-performance computing (HPC), artificial intelligence (AI), and physical principles to achieve both speed and accuracy. It uses a neural network to assist its calculations by approximating the ab initio data, thereby reducing the computational complexity from cubic to linear scaling.
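The core idea is easier to see in miniature. The sketch below is a toy stand-in for a deep potential, not the DeePMD-kit code itself: each atom gets a fixed-size descriptor of its local environment, a small neural network (here with made-up, untrained weights) maps that descriptor to a per-atom energy, and the total energy is the sum over atoms, so the cost grows linearly with the number of atoms.

```python
# Toy sketch of the "deep potential" idea (illustrative only; not DeePMD-kit).
import numpy as np

rng = np.random.default_rng(0)

def local_descriptor(positions, i, cutoff=6.0, max_neighbors=16):
    """Toy environment descriptor: the largest inverse distances to atoms within a cutoff.
    A production code would use neighbor lists so this step costs O(1) per atom."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d = d[(d > 1e-9) & (d < cutoff)]
    inv = np.sort(1.0 / d)[::-1][:max_neighbors]
    out = np.zeros(max_neighbors)
    out[:inv.size] = inv
    return out

# A tiny multilayer perceptron with random (untrained) weights standing in for the
# network that deep-potential methods fit to ab initio reference data.
W1, b1 = rng.normal(size=(16, 32)), np.zeros(32)
W2, b2 = rng.normal(size=32), 0.0

def atomic_energy(descriptor):
    h = np.tanh(descriptor @ W1 + b1)
    return h @ W2 + b2

def total_energy(positions):
    # Linear scaling: one fixed-cost network evaluation per atom.
    return sum(atomic_energy(local_descriptor(positions, i)) for i in range(len(positions)))

positions = rng.uniform(0.0, 20.0, size=(200, 3))  # 200 atoms in a toy box
print(f"toy total energy: {total_energy(positions):.3f}")
```

Training such a network against ab initio reference data, and making its evaluation fast across thousands of GPUs, is where the team’s real work lies.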

Simulating a block of copper atoms, the team put DeePMD-kit to the test on Summit with the goal of seeing how far they could push the simulation’s size and timescale beyond AIMD’s accepted limitations. They were able to simulate a system of 127.4 million atoms—more than 100 times larger than the current state of the art. Furthermore, the simulation achieved a time to solution at least 1,000 times faster than previously possible, reaching 2.5 nanoseconds of simulated time per day of computing and a peak performance of 275 petaflops, both in mixed-half precision (one petaflop is one thousand million million floating-point operations per second).

“By combining physical principles and the representation power of deep neural networks, the Deep Potential method can achieve very good accuracy, especially for complex problems,” said Weile Jia, a postdoc in applied mathematics in Professor Lin Lin’s group in the mathematics department at UC Berkeley, who co-led the project with Linfeng Zhang of Princeton. “Then we reorganize the data layout for bigger granularity on GPU and use data compression to significantly speed up the bottleneck. The neural network operators are optimized to the extreme, and most importantly, we successfully use half-precision in our code without losing accuracy.”

Square Kilometre Array: Massive Data Processing to Explore the Universe

“The innovative results already achieved and goals being pursued by this international team will greatly benefit the Next Generation Very Large Array, the Square Kilometre Array, and the next generation of radio interferometer facilities around the world.” —Tony Beasley, Director, National Radio Astronomy Observatory

Scheduled to begin construction in 2021, the Square Kilometre Array (SKA) promises to become one of the biggest “Big Science” projects of all time in sheer physical size: a radio telescope array with a combined collecting area of over 1 square kilometer, or 1 million square meters. Once completed in the deserts of South Africa and Australia in the late 2020s, SKA’s thousands of dishes and low-frequency antennas will probe the universe to unravel its mysteries.

SKA’s mission ultimately means it will produce massive amounts of information—an estimated 600 petabytes of data per year. Collecting, storing, and analyzing that data will be critical in producing SKA’s scientific discoveries. How will it be managed?

Building an end-to-end data-processing system on such an unprecedented scale is the task of an international team of radio astronomers, computer scientists, and software engineers. Workflow experts from the International Centre for Radio Astronomy Research (ICRAR) in Australia and the Shanghai Astronomical Observatory (SHAO) in China are developing the DALiuGE workflow management system; GPU experts from Oxford University are optimizing the performance of the data generator; and input/output (I/O) experts at ORNL are producing I/O middleware based on the ORNL-developed Adaptable IO System (ADIOS). All three core software packages were developed by the team from the outset with leadership-class supercomputers in mind.

Because SKA does not yet exist, its huge data output was simulated on Summit to test the team’s work, running a complete end-to-end workflow for a typical 6-hour SKA Phase 1 Low Frequency Array observation. Using 99 percent of Summit, the team achieved a peak performance of 130 petaflops in single precision, a data generation rate of 247 gigabytes per second, and a pure I/O rate of 925 gigabytes per second.
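The two throughput figures above measure different stages of the same pipeline: how fast simulated telescope data can be produced, and how fast it can be pushed through the I/O layer. The toy script below assumes nothing about the team’s actual DALiuGE or ADIOS code; it simply shows the basic shape of such a staged workflow, overlapping a data generator with a writer and reporting an end-to-end rate.

```python
# Illustrative producer/writer pipeline (not the team's DALiuGE/ADIOS software).
import os
import queue
import threading
import time

import numpy as np

CHUNK_MB = 64      # size of one simulated data block (MiB)
NUM_CHUNKS = 8     # total volume: 512 MiB
buf = queue.Queue(maxsize=4)   # bounded queue so generation and I/O overlap

def generate(q):
    """Stand-in for the simulated telescope data generator."""
    for _ in range(NUM_CHUNKS):
        q.put(np.random.rand(CHUNK_MB * 1024 * 1024 // 8))  # 64 MiB of float64
    q.put(None)  # sentinel: no more data

def write(q, path="ska_toy.bin"):
    """Stand-in for the I/O middleware writing to the file system."""
    with open(path, "wb") as f:
        while (chunk := q.get()) is not None:
            chunk.tofile(f)
    os.remove(path)  # clean up the toy output

start = time.perf_counter()
producer = threading.Thread(target=generate, args=(buf,))
producer.start()
write(buf)
producer.join()
elapsed = time.perf_counter() - start
print(f"end-to-end throughput: {CHUNK_MB * NUM_CHUNKS / 1024 / elapsed:.2f} GiB/s")
```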

“For the first time, an end-to-end SKA data-processing workflow was executed in a production environment. It helps the SKA community—as well as the entire radio astronomy community—determine critical design factors for multi-billion-dollar next generation radio telescopes,” said Ruonan Wang, a software engineer in ORNL’s Scientific Data Group who works on the project. “It validated our ability, from both software and hardware perspectives, to process a key science case of SKA, which will answer some of the fundamental questions of our universe.”

DSNAPSHOT: An Accelerated Approach to Literature-Based Discovery

“The DSNAPSHOT algorithm approach … enables the identification of meaningful paths and novel relations on a previously unseen scale. Consequently, it moves the biomedical research community closer to a framework for analyzing how novel relations can be identified across the entire body of scientific literature.” —Michael Weiner, PhD, VP AxioMx, Molecular Sciences and Head, Global Research of Abcam

In 1986, the late information scientist Don Swanson introduced the concept of “undiscovered public knowledge” in the field of biomedical research. His idea was both intriguing and straightforward: Out of the millions of published pieces of medical literature, what if there are yet unseen connections between their findings that could lead to new treatments? If, for example, “A affects B” in one study and “B affects C” in another, perhaps A and C have undiscovered commonalities worth investigating. Swanson proved his point by analyzing unrelated papers for such links, leading to hypothetical treatments that were later supported by clinical studies, such as taking magnesium supplements to help prevent migraine headaches. This process became known as “Swanson Linking.”
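The mechanics of Swanson Linking can be sketched in a few lines. The snippet below is a deliberately tiny illustration (the relation pairs are simplified versions of Swanson’s published examples, not real extracted data): given direct “A affects B” findings, it surfaces indirect A-to-C candidates that no single paper states.

```python
# Toy Swanson Linking: surface indirect links implied by pairs of direct findings.
direct_findings = {
    "magnesium deficiency": {"vascular spasm"},
    "vascular spasm": {"migraine"},
    "fish oil": {"blood viscosity"},
    "blood viscosity": {"Raynaud's syndrome"},
}

def swanson_candidates(findings):
    """Yield (A, B, C) where A affects B and B affects C, but A -> C is never stated."""
    for a, bs in findings.items():
        for b in bs:
            for c in findings.get(b, ()):
                if c not in bs and c != a:
                    yield a, b, c

for a, b, c in swanson_candidates(direct_findings):
    print(f"hypothesis: {a} -> {c}  (via {b})")
```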

But given the sheer volume of scientific literature in existence, humans alone cannot effectively mine it for undiscovered connections at scale. For example, the US National Library of Medicine’s PubMed database contains over 30 million citations and abstracts for biomedical literature. How can researchers possibly track that much information in its totality and find the patterns that may help identify new treatments?

One answer may be data-mining algorithms optimized for GPU-accelerated supercomputers such as ORNL’s Summit. When the federal government mobilized its national labs in the fight against COVID-19 in March, a team of ORNL and Georgia Tech researchers was assembled by ORNL computer scientist Ramakrishnan Kannan and Thomas E. Potok, head of ORNL’s Data and AI Systems Section of the Computer Science and Mathematics Division. The team’s mission was to investigate new ways of searching large-scale bodies of scholarly literature—and they ultimately found a way to conduct Swanson Linking on huge datasets at unprecedented speed.

Dasha Herrmannova from Kannan’s team began by creating a graph dataset based on Semantic MEDLINE—a dataset of biomedical concepts and the relations between them—extracted from PubMed. The team then expanded the graph with information extracted from the COVID-19 Open Research Dataset (CORD-19), resulting in a dataset of 18.5 million nodes representing concepts and papers, with 213 million relationships between them.

To search this massive dataset (via knowledge graph representations) for potential COVID-19 treatments, the team developed a new high-performance implementation of the Floyd-Warshall algorithm. The classic algorithm, originally published in 1962, determines the shortest distance between every pair of vertices in a given graph. (In literature-based discovery, the shortest paths are the ones most likely to reveal new connections between scholarly works.) To overcome the computational bottleneck of tackling massive graphs, Kannan, Piyush Sao, Hao Lu, and Robert Patton from ORNL, in collaboration with Vijay Thakkar and Rich Vuduc from Georgia Tech, optimized their version of the algorithm for distributed-memory parallel computers accelerated by GPUs. They named it Distributed Accelerated Semiring All-Pairs Shortest Path (DSNAPSHOT).
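For reference, here is the textbook, single-node form of Floyd-Warshall that DSNAPSHOT generalizes. This sketch conveys only the recurrence; the team’s contribution is recasting it over a semiring and distributing the blocked computation across thousands of GPUs, none of which appears in this toy version.

```python
# Textbook Floyd-Warshall: all-pairs shortest paths on one machine (illustrative only).
import math

def floyd_warshall(n, edges):
    """edges maps (u, v) -> weight; returns an n x n matrix of shortest distances."""
    dist = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    for k in range(n):             # allow paths routed through vertex k
        dk = dist[k]
        for i in range(n):
            dik = dist[i][k]
            row = dist[i]
            for j in range(n):
                if dik + dk[j] < row[j]:
                    row[j] = dik + dk[j]
    return dist

# Tiny example: the two-hop path 0 -> 1 -> 2 beats the direct 0 -> 2 edge.
edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 5.0}
print(floyd_warshall(3, edges)[0][2])  # 2.0
```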

In effect, the team’s DSNAPSHOT is a supercharged version of Floyd-Warshall, able to identify the shortest paths in huge graphs in a matter of minutes. Using 90 percent of the Summit supercomputer—or 4,096 nodes, adding up to 24,576 GPUs—the team completed an all-pairs shortest-path computation on a graph with 4.43 million vertices in 21.3 minutes, reaching a peak performance of 136 petaflops in single precision. If every person on Earth completed one calculation per second, it would take the world’s population (~7 billion) seven and a half months to complete what DSNAPSHOT can do in 1 second on Summit.
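That comparison checks out with simple arithmetic:

```python
# Back-of-the-envelope check of the people-versus-Summit comparison above.
ops_per_second = 136e15            # 136 petaflops sustained for one second
people = 7e9                       # ~7 billion people, one calculation each per second
seconds_for_humanity = ops_per_second / people
print(seconds_for_humanity / 86400 / 30.4)   # ~7.4 months
```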

“To the best of our knowledge, DSNAPSHOT is the first method capable of calculating shortest path between all pairs of entities in a biomedical knowledge graph, thereby enabling the discovery of meaningful relations across the whole of biomedical knowledge,” Kannan said. “Looking forward, we believe this novel capability will enable the mining of scholarly knowledge corpora when embedded and integrated into artificial intelligence–driven natural language processing workflows at scale.”

BerkeleyGW: A New View into Excited-State Electrons

“The BerkeleyGW team’s demonstration of excited-state calculations with the GW method for 1,000-atom systems on accessible HPC facilities will be a game-changer. Researchers with diverse interests will be able to pursue fundamental understanding of excited states and physical processes in materials systems including novel two-dimensional semiconductors, electrochemical interfaces, organic molecular energy harvesting systems, and materials proposed for quantum information systems.” —Mark S. Hybertsen, Group Leader, Theory & Computation Group, Center for Functional Nanomaterials, Brookhaven National Laboratory

Historical epochs are often delineated by the materials that helped shape civilization, from the Stone Age to the Steel Age. Our current period is often referred to as the Silicon Age—but while those earlier eras were characterized by the structural properties of their predominant materials, silicon is different. Rather than ushering in new ways of building big things, its technological leap takes place on an atomic level, facilitating an information revolution.

Used as the main material in integrated circuits (aka microchips), silicon has enabled the world of data processing we now live in, from ever-more-powerful computers to ubiquitous handheld devices. Central to its success has been the ability of chip designers to engineer circuits that are increasingly faster and smaller yet gain capacity as more and more transistors are added. But can microprocessor architects keep up with Moore’s law and continue to double the number of transistors in an integrated circuit every two years?

One route forward may be found in the work of a team of six physicists, materials scientists, and HPC specialists from Lawrence Berkeley National Laboratory, UC Berkeley, and Stanford University that performed the largest-ever study of “excited-state” electrons using ORNL’s Summit supercomputer. Understanding and controlling such electronic excitations in silicon and other materials is key to designing the electronic and optoelectronic devices that have sparked the current information era. What’s more, the accurate modeling of excited-state properties of electrons in materials plays a crucial role in the rational design of other transformative technologies, including photovoltaics, batteries, and qubits for quantum information and quantum computing. In essence, the team’s high-performance calculations could help design new materials for these next-generation technologies.

A state-of-the-art tool for determining excitations in materials is the “GW method,” an approach for calculating the self-energy (the quantum energy a particle acquires from interactions with its surrounding environment) of a system of interacting electrons. The team adapted its own software package, BerkeleyGW, a quantum many-body perturbation theory code for excited states, to run on Summit’s GPU accelerators.
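For readers who want the formula behind the name: in the standard textbook form of the approximation (stated here schematically, not as a description of BerkeleyGW’s specific implementation), the self-energy is the product of the one-particle Green’s function G and the screened Coulomb interaction W,

    \Sigma(1,2) = i\,G(1,2)\,W(1,2),

where each numbered argument bundles the space, spin, and time coordinates of an electron. The product GW is what gives the method its name; codes like BerkeleyGW evaluate it, and the quasiparticle energies that follow from it, for realistic materials.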

The team’s study of a system of defects in silicon and silicon carbide resulted in groundbreaking performance: the largest high-fidelity GW calculations ever made, with 10,986 valence electrons. By running on the entire Summit supercomputer, they also achieved 105.9 petaflops of double-precision performance with a time to solution of roughly 10 minutes.

“What’s really exciting about these numbers is that together they usher in the practical use of the high-fidelity GW method to the study of realistic complex materials,” said Jack Deslippe, team leader and head of the Applications Performance Group at the National Energy Research Scientific Computing Center, or NERSC. “These will be materials with defects, with interfaces, and with large geometries that drive real device design in quantum information, energy generation and storage, and next-gen electronics.”

UT-Battelle LLC manages Oak Ridge National Laboratory for DOE’s Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE’s Office of Science is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.

More info: https://www.olcf.ornl.gov/2020/11/10/four-teams-using-ornls-summit-supercomputer-named-finalists-in-2020-gordon-bell-prize/


Source: COURY TURCZYN, ORNL
