Edge to Exascale: A Trend to Watch in 2022

By Tiffany Trader

January 5, 2022

Edge computing is an approach in which data is processed and analyzed at the point of origin – the place where the data is generated. This is done to make data more accessible to end-point devices and users, and to reduce the response time for data requests. HPC-class computing and networking technologies are critical to many edge use cases, and the intersection of HPC and ‘edge’ promises to be a hot topic in 2022. In this Q&A, Hyperion Research Senior Adviser Steve Conway describes the characteristics of edge computing and its relationship with HPC, including the edge-to-exascale paradigm.

HPCwire: There seems to be a growing buzz about edge computing in HPC circles. In fact, you mentioned it in HPCwire’s “See what we see at SC” video.

Steve Conway, Hyperion

Steve Conway: Over time, the rise of edge computing could have a major impact on the global HPC community, both on premises and in the cloud. Edge computing is creating an important opportunity for HPC that users and vendors are starting to build into their plans. These are sometimes labeled “edge-to-exascale” strategies, or for the more excitable, “metaverse” strategies.

HPCwire: What is edge computing? How do you define it?

Conway: Edge computing is a relatively new form of distributed computing in which much or all of the computation is done directly on or near the data sources. This contrasts with historical practice for distributed systems, which typically involves sending all source data to distant, centralized datacenters or cloud computing platforms. In many cases, the limited computing power available at or near the data sources is adequate; where deeper analysis is required, typically only a small subset of edge computing results needs to be sent to datacenters, clouds, or containers for processing on more powerful computers.
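
To make that division of labor concrete, here is a minimal sketch (the sensor data, function names, and threshold are illustrative assumptions, not from the interview): an edge node summarizes a local sensor stream and forwards only a compact record of anomalies, rather than shipping every raw sample to a datacenter or cloud.

```python
# Illustrative sketch only: summarize readings locally and forward a small
# record upstream instead of the full raw stream (names/values are made up).
from statistics import mean

def process_at_edge(samples, threshold=5.0):
    """Return a compact summary plus only the out-of-range readings."""
    avg = mean(samples)
    anomalies = [s for s in samples if abs(s - avg) > threshold]
    # Only this small record needs to travel over the network.
    return {"count": len(samples), "mean": round(avg, 2), "anomalies": anomalies}

if __name__ == "__main__":
    raw = [20.1, 20.3, 19.9, 35.7, 20.2]   # one window of local sensor data
    print(process_at_edge(raw))            # {'count': 5, 'mean': 23.24, 'anomalies': [35.7]}
```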

Edge data sources roughly correspond to Internet of Things (IoT) devices and include vehicles and traffic sensors, medical devices, product manufacturing lines, military sites, and many other data-generating sources. The IoT is no longer just an “Internet of stupid things” such as home appliances. Today’s IoT also includes sophisticated devices that generate much larger, more complex data. And the edge isn’t always earthbound; satellites and spacecraft also generate a lot of complex data that needs processing.


HPCwire: What are the main advantages and disadvantages of edge computing?

Conway: Most edge attributes are advantages, which is why Hyperion Research and others expect robust growth for edge computing. The main advantages are faster results, lower costs, higher autonomy and reliability, greater privacy protection and security, and, most important of all, scalability.

Edge computing’s low latency enables faster responses to events in the field, such as identifying traffic violators, shoplifters and cyber criminals in time for apprehension, giving weather forecasters extra minutes to alert local communities to severe storms, or allowing cities and towns to re-route traffic before serious congestion happens. Processing all or most data at the edge can also substantially reduce the cost of network services and storage in clouds and datacenters. Because edge systems are not typically shared resources, they can offer greater privacy than long-distance networks. Also, since less data is transmitted from edge systems, the security and regulatory policies that are most appropriate for a user’s organization, business, or occupation can be implemented at the edge.

But the single greatest benefit of edge computing is scalability—the ability to efficiently handle growth in the volume of source data. Without edge computing, centralized facilities might need to become prohibitively large and expensive as the number of edge devices and the volume of source data grow. Edge computing’s reliance on small, modular data processing units, close to data sources, means that edge computing initiatives can often expand cost-effectively to nearly unlimited numbers of these units.

HPCwire: What about edge data security?

Conway: Where data security is concerned, there are advantages and disadvantages today. On the plus side, large amounts of data are more difficult to steal from many edge locations than from one central server, small data processed at the edge is usually less mission-critical than data sent to central servers, and keeping most data at the edge makes central servers less likely to be attacked. On the minus side, edge devices may not be designed or tested with cyber security in mind, loopholes and vulnerabilities in edge security may provide network access to central servers, and edge devices may be physically small enough to steal or manipulate.

HPCwire: What do you see as HPC’s role in edge computing? How do HPC and edge intersect and what is enabled by that intersection?

Conway: There’s an understandable tendency in the HPC and larger IT communities to see edge computing as an extension of what happens in datacenters and clouds. But it’s also important to view things from the perspective of the edge, where a large majority of the data may be transient, frequently overwritten, and never need more powerful resources in datacenters and clouds.

From this perspective, HPC has a crucial role to play in the important subset of edge computing applications that need wide-area analysis and control, as opposed to just local responsiveness. A large portion of the onetime Top500-leading Tianhe-1a supercomputer, for example, was dedicated to urban traffic management in Guangzhou. Hyperion Research and other experts who follow this closely believe HPC may be the glue that unifies the emerging global IT infrastructure, from edge to exascale.

HPCwire: Is this a new role for HPC?

Conway: HPC is no stranger to processing “big data” aggregated from many local sources for wide-area analysis, a forerunner of edge computing. Prominent examples include numerical weather forecasting based on high-volume data supplied by local human observers and sensor-bearing weather balloons, monitoring of telecommunications by government agencies around the world, and mosaicking of satellite images by space agencies to produce virtual flyovers of Earth and other planets.

HPCwire: What are some emerging edge applications that will need HPC support?

Conway: Many existing HPC-supported science and engineering applications will benefit from increasing use of edge data. On the commercial and dual-use sides, besides urban traffic management and related automated driving systems, other prominent edge use cases for HPC include precision medicine, fraud and anomaly detection, business intelligence, smart cities development and affinity marketing. Hyperion Research’s recently completed in-depth study of the worldwide HPC market found that 80 percent of the surveyed HPC sites run or plan to run one or more of these applications, which often combine simulation with AI methodologies. So, the HPC edge trend is well under way.

HPCwire: A lot of what happens at the edge involves data. How does AI figure into edge computing and HPC’s role in it?

Conway: HPC is at the forefront of AI R&D today and AI methods will be crucially important for HPC’s role in edge computing, but both data-intensive simulation and data-intensive analytics will be needed, sometimes in combination to support the same workload. HPC’s role in edge computing is based mainly on ultrafast computing, ultrafast data movement, and ultralarge and capable memory and storage systems.
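
One common pattern that combines these strengths is to run lightweight AI inference at the edge and escalate only the hard cases to datacenter- or HPC-scale resources. The sketch below is illustrative only; the toy scoring heuristic, confidence floor, and escalation stub are assumptions, not anything described in the interview.

```python
# Illustrative sketch only (all names and thresholds are made up): lightweight
# inference runs at the edge; only low-confidence events are escalated to a
# more capable HPC/datacenter resource for deeper simulation or analytics.

def edge_infer(event):
    """Stand-in for a small on-device model: label an event and score confidence."""
    score = 1.0 - min(abs(event - 20.0) / 20.0, 1.0)   # toy confidence heuristic
    label = "normal" if score > 0.5 else "suspect"
    return label, score

def escalate_to_hpc(event):
    """Stand-in for submitting the event to a central HPC service."""
    return f"queued event {event!r} for datacenter-scale analysis"

def handle_event(event, confidence_floor=0.9):
    label, confidence = edge_infer(event)
    if confidence >= confidence_floor:
        return label                      # resolved entirely at the edge
    return escalate_to_hpc(event)         # the rare, harder case goes upstream

if __name__ == "__main__":
    print(handle_event(20.5))             # handled locally: 'normal'
    print(handle_event(55.0))             # escalated upstream
```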

HPCwire: Will edge computing affect the roles of HPC vendors? In what ways?

Conway: Edge computing promises to reorient the global IT infrastructure and affect the roles of HPC system vendors, cloud services providers (CSPs), networking and storage suppliers, and others. HPC has been a self-contained niche market, but the edge computing opportunity will pull HPC into the larger IT mainstream for an important but limited role at the top of the edge-to-exascale food chain. Leading HPC vendors are already starting to exploit this new opportunity, not only in datacenters and clouds, but also with HPC containers close to edge locations. Distances from the edge and latencies are going to be important for determining the roles of HPC and other edge computing resources. One challenge for HPC vendors is achieving greater integration with the mainstream IT market and supporting open standards that can apply from edge to exascale. For leading HPC vendors, this might mean working more closely with business units within their own companies that serve the mainstream market.


HPCwire: What other companies should we be paying attention to?

Conway: HPCwire already has most of the bases covered, in my opinion, by tracking today’s important HPC vendors. I think companies involved in networking, including 5G and 6G, will be useful to watch. So will organizations pursuing open standards, cost- and energy-efficient processors, cyber security and distributed computing benchmarks.

HPCwire: Major HPC vendors – and more broadly a number of tech giants – are using terms like “metaverse” and “omniverse.” To what extent are those concepts related to edge computing and HPC?

Conway: As I’ve seen these terms used, they usually refer to an immersive environment that enables people to experience the world as a virtual or augmented reality. Making these experiences available to many people will need to happen mostly at the edge, using edge and near-edge resources that might sometimes include HPC containers. I see HPC having an important R&D role in developing this environment and the more challenging experiences. Companies including SGI and Cray created some impressive HPC-enabled synthetic reality experiences in the 1990s, including flyovers, 3D virtual tours, and training modules for ship captains at shipping companies.

HPCwire: What is the market opportunity for “edge computing” and – more relevant to our readers – what impact will the advance of edge computing have on the HPC market?

Conway: It’s safe to say that the edge computing opportunity will add revenue to the HPC market, but it’s too early to quantify that. The whole edge ecosystem needs time to come together.

HPCwire: Another topic with a lot of buzz right now is composable computing. Is there a link between composable and edge computing?

Conway: Sure. As the requirements for HPC systems become more heterogeneous, it becomes more difficult, technically and economically, to satisfy them effectively with a single, monolithic architecture. You risk wasting HPC resources by frequently over- or under-provisioning specific capabilities for the workloads in use. HPC vendors are wrestling with this issue, which isn’t an easy one to resolve quickly.


Bio: Steve Conway is Senior Adviser of HPC Market Dynamics at Hyperion Research. Conway directs research related to the worldwide market for high performance computing. He also leads Hyperion Research’s practice in high performance data analysis (big data needing HPC).
