Convergence of HPC, AI and Cloud Computing Charted at PEARC19 Keynote

By Ken Chiacchia, Pittsburgh Supercomputing Center/XSEDE

August 1, 2019

A trio of keynote presentations from Intel, Google and Microsoft at the PEARC19 conference in Chicago on July 31 charted the likely future of academic and high-performance computing in the cloud. While each company and presenter carried a distinct message about the opportunities and challenges of moving more open research to cloud services, all three also held that cloud providers are learning from the HPC community and adjusting their products and models to make the transition more attractive.

PEARC19, in progress in Chicago this week (July 28-Aug. 1), explores current practice and experience in advanced research computing including modeling, simulation and data-intensive computing. The primary focus this year is on machine learning and artificial intelligence. The PEARC organization coordinates the PEARC conference series to provide a forum for discussing challenges, opportunities and solutions among the broad range of participants in the research computing community.

Redefining HPC

In her presentation “Redefining HPC,” Patricia Damkroger of Intel looked at the paradigm shift that’s moving data analytics and AI into the cloud.


“We’ve talked about HPC going to the cloud for at least a decade,” she said. “It’s still not mainstream, but I think that’s changing … The biggest driver is data.”

For varied reasons, she explained, organizations as different as CERN and the Department of Defense have found moving data into the cloud to be a useful expansion of their internal compute capacities: for CERN it enables collaborative access, while for the DoD it maintains internal security.

Data are also a central need in AI, where training datasets have become massive and the infrastructure required for transparency and accuracy keeps expanding. AI and HPC, she argued, are converging, or at least ought to be.

“We need … to know what the AI is doing to the data. We also need to make sure we have review boards and security built in … The other thing we really need is the inclusive part,” she said, citing the problem that much medical research has not been gender or race inclusive, so the results don’t always fully represent the patient population. “AI is going to have to have that full data, or it’s not going to be accurate.” She cited the San Diego Supercomputer Center’s Expanse, the Texas Advanced Computing Center’s Frontera and the Pittsburgh Supercomputing Center’s (PSC’s) Bridges-2 as examples of upcoming systems that will play roles in this convergence.

Damkroger shared the podium with Nick Nystrom of PSC, who gave the audience the first public presentation of the center’s new Bridges-2 system. The NSF announced the award for Bridges-2 in June. Bridges-2, built in collaboration with HPE, will feature Intel’s 10nm Ice Lake processor along with other Intel CPUs.

“We’ve been working on this for a while,” he said. “This [system] was a convergence of HPC, AI and data.” Bridges-2’s predecessor, Bridges-1, which was designed for “new community” researchers with little or no computing experience and employed the first instance of Intel’s Omni-Path Architecture, runs common applications that make it cloud-friendly. The system, Nystrom added, can run HPC modeling and simulation alongside common tools such as Jupyter, as well as Spark and big-data workflows, bridging work that requires the strengths of both HPC and the cloud. Bridges-2 will expand on that capability.
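As a concrete, hypothetical illustration of that converged model, the sketch below shows a big-data step feeding an HPC-style numerical step inside one Python workflow. It is not Bridges code: the input file, the "site"/"value" columns and the toy simulation kernel are invented for illustration, and it assumes only that pyspark and numpy are installed.

```python
# Hypothetical sketch of a converged HPC/big-data workflow (illustrative only).
import numpy as np
from pyspark.sql import SparkSession

# Start a local Spark session; on a shared system this would instead attach
# to the cluster's Spark deployment.
spark = (SparkSession.builder
         .master("local[*]")
         .appName("converged-demo")
         .getOrCreate())

# Big-data step: aggregate a (hypothetical) CSV of per-site observations.
df = spark.read.csv("observations.csv", header=True, inferSchema=True)
site_means = df.groupBy("site").mean("value").collect()

# HPC-style step: feed the aggregates into a toy numerical kernel,
# standing in here for real modeling and simulation code.
initial_state = np.array([row["avg(value)"] for row in site_means])
trajectory = np.cumsum(initial_state)

print(trajectory)
spark.stop()
```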

Future Is HPC in the Cloud

In his keynote “Future Is HPC in the Cloud,” Google’s Ross Thomson surveyed the company’s offerings through Google Cloud Platform to enable true HPC in the cloud.

“There’s always a place for the giant computers people use to do massive simulations” for users with $100 million to fund Top500 systems, he said. But for users, or collections of users, who don’t need such a large system, “you can get a lot of computing done for $100 million on Google Cloud.”

He cited Google Cloud’s capability to provide virtual systems configured to each user’s required size, enabling them to scale up or even scale down without losing their investment as their needs change. HPC in the cloud, he added, can accelerate discovery by reducing queue wait times for large-batch workloads as well as relieve compute-resource limitations.
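To make the queue-wait argument concrete, here is a small, hypothetical back-of-the-envelope comparison in Python. Every figure (job size, queue wait, core counts, price) is an invented placeholder; the point is only that an elastic allocation can trade a wider, shorter run against a queued run of the same total core-hours.

```python
# Hypothetical turnaround comparison: fixed shared cluster vs. elastic cloud.
# All figures below are invented placeholders, not real benchmarks or prices.

CORE_HOURS = 4096              # total work in the batch job
QUEUE_WAIT_H = 12.0            # assumed wait in a busy shared-cluster queue
CLUSTER_CORES = 256            # cores the queued job is allotted
CLOUD_CORES = 1024             # cores an elastic allocation scales out to
PRICE_PER_CORE_HOUR = 0.05     # hypothetical on-demand price, $/core-hour

cluster_turnaround = QUEUE_WAIT_H + CORE_HOURS / CLUSTER_CORES
cloud_turnaround = CORE_HOURS / CLOUD_CORES    # no queue: size to the job
cloud_cost = CORE_HOURS * PRICE_PER_CORE_HOUR

print(f"Shared cluster: {cluster_turnaround:.1f} h to results")
print(f"Elastic cloud:  {cloud_turnaround:.1f} h to results, ~${cloud_cost:.0f}")
```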

Are There Closets in the Cloud?

In “Are There Closets in the Cloud?” Microsoft’s Tim Carroll charted the history of academic clusters, from dozens of systems in literal closets spread across campuses to the sophisticated, and in many ways highly optimized, campus systems now in operation. He noted that while some 70 percent of academic HPC centers employ cloud computing, only 10 percent of their jobs run in the cloud.


“The idea is to get more tools in more people’s hands, so that they can do good things with them,” Carroll said. For that to happen, both HPC and cloud providers will need to make cultural changes. “One of the things [in which] I think the cloud providers have done tremendous disservice to ourselves and the community is time and cost being the only metrics that matter in this space.” In some cases they are; but in public research, institutional ownership of systems, dual use for computer science and domain science, and funding models that differ from those in the private sector can all make that simple calculus inaccurate.

“All of these machines serve a dual purpose and are not simply a utility,” he said. “One cannot underestimate the impact of that.”

Among others, Carroll cited the National Oceanic and Atmospheric Administration (NOAA), which operates some of the most powerful HPC systems in the world and is seeking to move its global weather forecast code and capabilities into the cloud, giving outside collaborators and even citizen scientists open access to spur innovation.

“The tipping point was access, not price,” Carroll said. “[The] evolution and revolution is about opening up computation to domains of science that have never had access before … That’s a really important point to consider when we get a little wrapped around the axle these days about whether the cloud is right for HPC.”

Carroll recommended four activities for HPC users working out what the cloud would actually cost them. First, plan: identify and inventory the workloads that might run well in the cloud. Second, run both the obvious candidates and workloads that may not run as well; the real data generated will set realistic performance expectations. Third, collaborate with cloud providers, which can help smooth out cultural differences and produce more accurate estimates. Finally, estimate cost at the end of the process rather than the beginning, because the workflows drive the true cost.
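A minimal sketch of what that final costing step might look like once benchmarking data from the second step exist. Everything here is hypothetical: the workload names, node counts, measured runtimes and the per-node-hour price are invented placeholders, and a real estimate would use a provider's actual price sheet.

```python
# Hypothetical end-of-process cost estimate built from measured workload data.
# Workloads, runtimes and prices are invented placeholders.

PRICE_PER_NODE_HOUR = 2.40  # hypothetical on-demand price, $/node-hour

# (nodes, measured hours per run, runs per year) from the benchmarking step
measured_workloads = {
    "weather-model":   (64, 3.0, 365),
    "genome-pipeline": (8, 12.0, 50),
    "ml-training":     (16, 20.0, 24),
}

annual_cost = 0.0
for name, (nodes, hours, runs) in measured_workloads.items():
    node_hours = nodes * hours * runs
    cost = node_hours * PRICE_PER_NODE_HOUR
    annual_cost += cost
    print(f"{name:16s} {node_hours:10,.0f} node-h  ~${cost:,.0f}/yr")

print(f"{'total':16s} {'':10s}         ~${annual_cost:,.0f}/yr")
```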
