HPC Serves as a ‘Rosetta Stone’ for the Information Age

By Warren Froelich

July 12, 2018

Today high-performance computing is at the forefront of a new gold rush, a rush to discovery using an ever-growing flood of information and data. Computing is now essential to science discovery like never before. We are the modern pioneers pushing the bounds of science for the betterment of society. — SC17 General Chair Bernd Mohr, Jülich Supercomputing Centre 

In an age defined and transformed by its data, several large-scale scientific instruments around the globe might be viewed as mother lodes of precious data.

With names seemingly created for a techno-speak glossary, these interferometers, cyclotrons, sequencers, solenoids, satellite altimeters, and cryo-electron microscopes are churning out data in previously unthinkable and seemingly incomprehensible quantities — billions, trillions and quadrillions of bits and bytes of electromagnetic code.

Like the famed Rosetta Stone that enabled Ancient Egyptian inscriptions to be decoded, high-performance computing transforms 21st century digital data into valuable insight. Image credit: Olaf Herrmann

Yet, policy-makers at the National Science Foundation (NSF) and others plotting future directions in science believe that hidden within these veritable mountain-sized mines of information are clues to questions that have confounded humanity since its earliest thoughts: answers about those bits of glitter in the night sky, the nature of matter, the causes of disease, the origins of life, and even why and how we think about such things.

For this reason, the ability to convert this seemingly unintelligible digital data into rapid, meaningful discoveries has taken on added significance. Indeed, one of the NSF’s 10 Big Ideas for the future is “Harnessing Data for 21st Century Science and Engineering.”

Enter advanced or high-performance computing (HPC), which sifts and separates waste from valuable digital nuggets and, somewhat like a Rosetta Stone of the information age, decodes and translates this data into valuable insight.

“Advanced computing, along with experts charged with building and making the most of these HPC systems, has been critical to many Nobel Prizes, from work involving traditional modeling and simulation to projects designed for more data-intensive workloads,” said Michael Norman, director of the San Diego Supercomputer Center (SDSC) at UC San Diego.

As evidence, Norman and others point to several recent Nobel Prizes in chemistry and physics — including international collaborations exploring the dark side of the universe and others delving into the dynamics of proteins critical for tomorrow’s targeted therapies.

Each has relied on the marriage of supercomputing technology and expertise with large-scale scientific instruments to achieve their goals, all connected by faster and faster high-speed communications networks. And each touches on other Big Ideas from the NSF, such as “The Era of Multi-Messenger Astrophysics,” which includes a collection of approaches to expand our observations and understandings of the universe; a “Quantum Leap” into understanding the behavior of matter and energy at very small – atomic and subatomic – scales; and “Understanding the Rules of Life,” an initiative that will require convergence of research across biology, computer science, mathematics, behavioral sciences, and engineering.

SDSC’s Petascale Comet Supercomputer. Credit: Ben Tolo, SDSC

Some of this effort is based on the solution of fundamental mathematical equations to create models or simulations using HPC systems now capable of generating quadrillions of calculations per second, such as Comet, funded by the NSF and housed at SDSC. Other HPC research requires the access, analysis, and interpretation of previously unfathomable amounts of data generated by a wide cross-section of sensors and detectors, via a modality called high-throughput computing (HTC). Simulation and data analysis, along with experimentation, sometimes complement and even blend with one another for discovery.
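The first modality, equation-driven simulation, can be illustrated in miniature. The sketch below integrates the one-dimensional heat equation with an explicit finite-difference scheme; the grid size, diffusivity, and initial condition are illustrative choices, not tied to any actual Comet workload.

```python
import numpy as np

# Explicit finite-difference solution of the 1-D heat equation
# du/dt = alpha * d2u/dx2 -- a toy instance of the equation-driven
# simulation workloads described above.
alpha = 1.0
nx, nt = 101, 500
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / alpha        # within the explicit stability limit

u = np.zeros(nx)
u[nx // 2] = 1.0                  # initial heat spike in the middle

for _ in range(nt):
    # Jacobi-style update of interior points; boundaries held at zero.
    # NumPy evaluates the right-hand side fully before the in-place add.
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(f"peak temperature after {nt} steps: {u.max():.4f}")
```

Production simulations differ in scale (billions of grid points, three dimensions, coupled equations) rather than in kind, which is why they demand quadrillions of operations per second.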

“HTC is a way of consuming computer resources, including those we label as HPC,” said Frank Würthwein, professor of physics at UC San Diego and Distributed High-Throughput Computing Lead at SDSC. “The way these large-scale instruments do analysis requires the HTC ‘modality’ of computing. This is distinct from the standard ‘submit a job to the queue’ which is what people traditionally do for simulations.”
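Würthwein’s distinction can be sketched in a few lines: an HTC campaign is a large pile of small, independent tasks consumed opportunistically by whatever workers are free, rather than one tightly coupled parallel job. The per-event “analysis” below is a hypothetical stand-in, not real detector code.

```python
from concurrent.futures import ThreadPoolExecutor

def process_event(event_id: int) -> int:
    # Placeholder for per-event detector analysis; each event is
    # processed independently of every other.
    q, r = divmod(event_id, 7)
    return q + r

def run_campaign(n_events: int) -> int:
    # The HTC "modality": many independent tasks farmed out to a pool,
    # as opposed to one monolithic job submitted to a batch queue.
    with ThreadPoolExecutor(max_workers=8) as pool:
        return sum(pool.map(process_event, range(n_events)))

print(run_campaign(1000))
```

Because no task depends on another, an HTC workload can backfill idle cycles on an HPC system, which is exactly how the large-instrument collaborations consume such resources.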

An Integrated Data Ecosystem

Those on the technological front line recognize that the challenges to keep up with the data explosion are enormous. Among other things, much of the science requires the integration of computational resources in an ecosystem that includes sophisticated workflow tools to orchestrate complex pathways for scheduling, data transfer, and processing. Massive sets of data collected through these efforts also require tools and techniques for filtering and processing, plus analytical techniques to extract key information. Moreover, the system needs to be effectively automated across different types of resources, including instruments and data archives.
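The orchestration requirement boils down to executing tasks in dependency order. A minimal sketch, with hypothetical stage names standing in for real pipeline steps (production workflow tools such as Pegasus or Makeflow layer scheduling, data transfer, and fault recovery on top of this kind of dependency graph):

```python
from graphlib import TopologicalSorter

# Each task maps to the tasks it depends on (its predecessors).
workflow = {
    "transfer_raw": [],
    "filter":       ["transfer_raw"],
    "calibrate":    ["transfer_raw"],
    "analyze":      ["filter", "calibrate"],
    "archive":      ["analyze"],
}

def run(task: str) -> None:
    # Placeholder: a real orchestrator would dispatch this to an
    # instrument, a data mover, or a compute cluster.
    print(f"running {task}")

order = list(TopologicalSorter(workflow).static_order())
for task in order:
    run(task)
```

Note that "filter" and "calibrate" have no ordering constraint between them, so a real scheduler could run them concurrently on different resources.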

Some suggest that all these components should be orchestrated into what’s being called a “super facility.” The goal, according to the U.S. Department of Energy, is to bring together users at multiple institutions “allowing geographically dispersed collaborators to tap into scientific resources and expertise, and analyze and share data with other users—all in real time and without having to leave the comfort of their office or lab.”

Said Würthwein: “These large-scale scientific instruments depend on large international cyberinfrastructures that a ‘super facility’ must integrate into seamlessly. The HPC system cannot be an island unto itself.”

The NSF concurs. “The grand challenges of today – protecting human health; understanding the food-energy-water nexus; exploring the universe on all scales – will not be solved by one discipline alone,” the agency stated in a 2017 report prepared for Congress. “They require convergence: the merging of ideas, approaches, and technologies from widely diverse fields of knowledge to stimulate innovation and discovery.”

Armed with ever-more powerful large-scale scientific instruments, research teams around the globe – some encompassing a wide variety of disciplines – already are converging to build an impressive portfolio of scientific advances and discoveries, with supercomputers serving as the critical linchpin for all these investigations.

Cosmic Discoveries

On July 4, 2012, at the CERN laboratory for particle physics outside Geneva, Switzerland, a theory first proposed in 1964 by François Englert and Peter W. Higgs was confirmed with the discovery of a Higgs particle. The theory, which garnered the duo the 2013 Nobel Prize in physics, is a central part of the Standard Model of particle physics that describes how the world is constructed at its most fundamental level, from the intense waves of energy and primordial particles released from the “Big Bang,” to the planet we inhabit, to those glittering specks of light we observe in the night sky.

The Compact Muon Solenoid (CMS) is a general-purpose detector at the Large Hadron Collider (LHC), which is the world’s largest and most powerful particle accelerator. Courtesy CERN.

Under a partnership with UC San Diego physicists and the Open Science Grid (OSG), a multi-disciplinary research partnership funded by the U.S. Department of Energy and the NSF, SDSC’s Gordon supercomputer provided auxiliary computing capacity to process massive raw data generated by the Compact Muon Solenoid (CMS) — one of two general purpose particle detectors at the Large Hadron Collider (LHC). LHC experiments are among the largest ever seen in physics, with each experiment involving collaborations of close to 200 institutions in more than 40 countries and in excess of 3,000 scientists and engineers.

“Access to Gordon, and its excellent computing speed due to its flash-based memory, really helped push forward the processing schedule for us,” said Würthwein, a member of the CMS project and executive director of OSG. “This was one of the first-ever integrations of HTC with a large HPC system, and with only a few weeks’ notice we were able to gain access to Gordon and complete the runs, making the data available for analysis in time to provide crucial input toward the international planning meetings on the future of particle physics.”

In February 2016, an international team representing more than 20 countries announced the first-ever detection of gravitational waves in the universe, based on the tell-tale “chirp” signature of two black holes merging about 1.3 billion years ago. The collision sent what some referred to as a “ripple in the fabric of space-time”: gravitational waves, hypothesized by Albert Einstein a century ago. The signal was detected on Earth, first by the NSF-funded Laser Interferometer Gravitational-Wave Observatory (LIGO) near Livingston, Louisiana; and then seven milliseconds later, and 1,890 miles away, at the second LIGO interferometer in Hanford, Washington. Three members of the team won the 2017 Nobel Prize in Physics for the discovery.
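The quoted seven-millisecond delay is consistent with simple arithmetic: light, and gravitational waves, cover the 1,890-mile baseline between the two sites in about ten milliseconds, and the observed delay is shorter because the wavefront did not happen to travel along the baseline itself.

```python
# Back-of-the-envelope check of the inter-site delay. The maximum
# possible delay is the light travel time along the baseline; any
# other source direction gives a shorter delay.
MILES_TO_KM = 1.609344
C_KM_PER_S = 299_792.458  # speed of light

baseline_km = 1890 * MILES_TO_KM
max_delay_ms = baseline_km / C_KM_PER_S * 1000
print(f"maximum possible inter-site delay: {max_delay_ms:.1f} ms")  # ~10.1 ms
```

The ratio of the observed delay to this maximum is one of the inputs used to constrain the direction the signal came from.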

LIGO operates two detector sites — one near Hanford in eastern Washington, and another near Livingston, Louisiana. The Livingston detector site is pictured here. Courtesy LIGO Collaboration.

SDSC’s Comet was one of several supercomputers used by researchers to confirm the landmark discovery.

“LIGO’s discovery of gravitational waves from the binary black hole required large-scale data analysis to validate the discovery claim,” said Duncan Brown, the Charles Brightman Professor of Physics at Syracuse University, who studies gravitational waveforms for black hole and neutron star binaries. “This includes measuring how significant the signal is compared to noise in the detector, and re-analyzing the data with simulated signals to ensure that we understand the astrophysical sensitivity of the search. Comet’s computer cycles were extremely important for us to complete large-scale simulations and fast validation of the search.”
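The significance measurement Brown describes rests on matched filtering: correlate the data stream against a template waveform and compare the peak response to the noise background. A toy version, using white noise and a chirp-like template (a real pipeline works in the frequency domain with detector-noise-weighted inner products):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4096
t = np.linspace(0.0, 1.0, n)

# Chirp-like template: frequency sweeping upward, loosely mimicking
# an inspiral; normalized to unit energy.
template = np.sin(2 * np.pi * (20 + 80 * t) * t) * np.hanning(n)
template /= np.linalg.norm(template)

noise = rng.normal(0.0, 1.0, n)
data_noise_only = noise
data_with_signal = noise + 8.0 * template  # injected simulated signal

def peak_snr(data: np.ndarray) -> float:
    # Cross-correlate the data with the unit-norm template; the peak,
    # relative to the spread of the correlation, is a crude
    # signal-to-noise estimate.
    corr = np.correlate(data, template, mode="same")
    return float(np.abs(corr).max() / np.std(corr))

print(f"noise only:  {peak_snr(data_noise_only):.1f}")
print(f"with signal: {peak_snr(data_with_signal):.1f}")
```

Injecting simulated signals and re-running the search, as Brown notes, is how the collaboration calibrates both the significance and the astrophysical sensitivity of such a statistic.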

Less than a year after the first discovery of gravitational waves, in October 2017 researchers announced they had detected gravitational waves generated by the collision of two neutron stars about 130 million light years from Earth, via the two LIGO instruments and the Europe-based Virgo interferometer, followed shortly by multiple telescopes and satellites built to capture light from the universe. This combination of observational instruments bears testimony to what’s become known as multi-messenger astronomy (MMA), in which multiple instruments – each built to detect a different cosmic messenger, from light across the electromagnetic spectrum to gravitational waves – are choreographed with one another, essentially in real time, to view the same patch of sky. Once again, Comet was one of several HPC systems to verify the signal, with allocations from NSF’s Extreme Science and Engineering Discovery Environment (XSEDE) and the OSG.

“The correlation of the three interferometers, two from LIGO and one from Virgo, significantly shrank the area in the sky for where to look,” said Würthwein.

Added Syracuse University’s Brown: “Comet’s contribution through the OSG and XSEDE allowed us to rapidly turn around the offline analysis in about a day. That, in turn, allowed us to do several one-day runs, as opposed to having to spend several weeks before publishing our findings.”

This image shows a high-energy neutrino event superimposed on a view of the IceCube Lab (ICL) at the South Pole. Courtesy IceCube Collaboration.

Ever since Wolfgang Pauli postulated their existence in December 1930, cosmologists have been hunting for neutrinos: subatomic particles that lack an electric charge, once described as “the most tiny quantity of reality ever imagined by a human being.” For the most part, cosmic neutrinos are believed to have been created about 15 billion years ago, soon after the birth of the universe. Others emerged more recently from some of the most violent events in the universe, such as exploding stars, gamma-ray bursts, black holes and neutron stars. But unlike photons and charged particles, neutrinos can emerge from their sources and, like cosmological ghosts, pass through the universe unscathed.

To help catch these near-massless messengers from deep space, an international team of researchers funded by the NSF set up IceCube, an observatory containing an array of 5,160 optical sensors deep within a cubic kilometer of ice at the South Pole. Encompassing 300 physicists from 49 institutions in 12 countries, IceCube already has achieved its primary goal of detecting the extraterrestrial flux of very high-energy neutrinos.

Frank Halzen, principal investigator of the IceCube Observatory and physics professor at the University of Wisconsin-Madison, explained the importance of the Comet supercomputer for isolating the signature pattern of neutrinos:  “The IceCube neutrino detector transforms natural Antarctic ice at the South Pole into a particle detector. Progress in understanding the precise optical properties of the ice leads to increasing complexity in simulating the propagation of photons in the instrument and to a better overall performance of the detector.”

“The photon propagation in the ice is very well-suited to run on graphics processing unit (GPU) hardware, such as that on Comet,” Halzen continued. “Pursuing efficient access to a large amount of GPU computing power is therefore of great importance to ensure that future IceCube analysis reaches the maximum precision and that the full scientific potential of the instrument is exploited.”
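Why photon propagation maps so naturally onto GPUs is easy to see: every photon is an independent random walk through the ice. The toy Monte Carlo below shows the structure, with illustrative scattering and absorption parameters that are not measured IceCube ice properties; on a GPU, each photon would simply become one thread.

```python
import math
import random

SCATTER_LEN = 25.0     # metres between scatters, illustrative
ABSORB_PROB = 0.02     # absorption probability per step, illustrative
DETECT_RADIUS = 120.0  # metres from the emission point, illustrative

def propagate(rng: random.Random) -> bool:
    """Return True if the photon reaches DETECT_RADIUS before absorption."""
    x = y = z = 0.0
    while True:
        if rng.random() < ABSORB_PROB:
            return False
        # Isotropic scatter: pick a uniformly random direction.
        theta = math.acos(2 * rng.random() - 1)
        phi = 2 * math.pi * rng.random()
        x += SCATTER_LEN * math.sin(theta) * math.cos(phi)
        y += SCATTER_LEN * math.sin(theta) * math.sin(phi)
        z += SCATTER_LEN * math.cos(theta)
        if math.sqrt(x * x + y * y + z * z) >= DETECT_RADIUS:
            return True

rng = random.Random(1)
n = 2000
detected = sum(propagate(rng) for _ in range(n))
print(f"detected {detected}/{n} photons")
```

Refining the optical model of the ice, as Halzen describes, amounts to making each step of this walk more realistic (depth-dependent scattering, anisotropy, dust layers), which is exactly what drives the growing demand for GPU cycles.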

Stay tuned for Part II
