Chipmakers Looking at New Architecture to Drive Computing Ahead

By Agam Shah

November 23, 2022

The ability to scale current computing designs is reaching a breaking point, and chipmakers such as Intel, Qualcomm and AMD are putting their heads together on an alternative architecture to push computing forward.

The chipmakers are coalescing around a sparse computing approach, which brings computation to the data rather than moving data to the computation, as current designs do.

The concept is still far out, but a new design is needed as the current computing model used to scale the world’s fastest supercomputers is unsustainable in the long run, said William Harrod, a program manager at the Intelligence Advanced Research Projects Activity (IARPA), during a keynote at the SC22 conference last week.

The current model is inefficient because it cannot keep up with the proliferation of data. Users can wait hours for results after sending data to computing hubs stocked with accelerators and other resources. The new approach will shorten the distance that data travels, process information more efficiently and intelligently, and generate results faster, Harrod said during the keynote.

“There needs to be an open discussion because we’re transitioning from a world of dense computation… into a world of sparse computation. It is a big transition, and companies are not going to move forward with changing designs until we can verify and validate these ideas,” Harrod said.

One of the goals behind the sparse computing* approach is to generate results in close to real time, or at least quickly, and to see results as the data changes, said Harrod, who previously ran research programs at the Department of Energy that ultimately led to the development of exascale systems.

The current computing architecture pushes all data and computing problems – big and small – over networks into a web of processors, accelerators and memory substructures. There are more efficient ways to solve problems, Harrod said.

The intent of a sparse computing system is to solve the data-movement problem. Current network designs and interfaces can bog down computing by forcing data to move over long distances. Sparse computing cuts the distance that data travels, processes it intelligently on the nearest chips, and places equal emphasis on software and hardware.

“I don’t see the future as relying on just getting a better accelerator, because getting a better accelerator won’t solve the data movement problem. In fact, most likely, the accelerator is going to be some sort of standard interface to the rest of the system that is not designed at all for this problem,” Harrod said.

Harrod learned a lot from designing exascale systems. One takeaway was that scaling up computing speed under the current computing architecture – which is modeled on the von Neumann architecture – wouldn’t be feasible in the long run.

Another conclusion was that the energy cost of moving data over long distances amounted to waste. The Department of Energy’s original goal was to create an exascale system in the 2015-2016 timeframe running at 20 megawatts, but it took a lot longer. The world’s first exascale system, Frontier, which topped the Top500 list earlier this year, draws 21 megawatts.

“We have incredibly sparse data sets, and the operations that are performed on the datasets are very few. So you do a lot of movement of data, but you don’t get a lot of operations out of it. What you really want to do is efficiently move the data,” Harrod said.
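Harrod's point about moving a lot of data for very few operations can be made concrete with a back-of-the-envelope calculation. The sketch below is a generic illustration (the sizes and density are made-up assumptions, not figures from the talk): it counts bytes shipped versus useful operations when a mostly-empty matrix is moved wholesale to a compute unit.

```python
# Toy illustration (assumed numbers, not from the talk): bytes moved
# vs. useful work for a matrix-vector product where most entries
# contribute nothing to the result.

def dense_traffic(n_rows, n_cols, bytes_per_val=8):
    """Bytes moved if the whole matrix is shipped to the compute unit."""
    return n_rows * n_cols * bytes_per_val

def useful_ops(nnz):
    """Only the nonzero entries matter: one multiply and one add each."""
    return 2 * nnz

# A 10,000 x 10,000 matrix with only 0.1% nonzero entries.
n = 10_000
nnz = n * n // 1000

moved = dense_traffic(n, n)   # 800,000,000 bytes shipped
ops = useful_ops(nnz)         # 200,000 useful operations
print(f"bytes moved per useful op: {moved / ops:.0f}")  # → 4000
```

The ratio is the point: thousands of bytes in motion per operation that actually contributes, which is the imbalance Harrod describes.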

Not every computing problem is equal, and sticking small and big problems on GPUs is not always the answer, Harrod said. In a dense computing model, moving smaller problems into high-performance accelerators is inefficient.

IARPA’s computing initiative, called AGILE (short for Advanced Graphical Intelligence Logical Computing Environment), is designed to “define the future of computing based on the data movement problem, not on floating point units or ALUs,” Harrod said.

Computation typically involves generating results from unstructured data distributed over a wide network of sources. The sparse computing model breaks up the dense model into a more distributed and asynchronous computing system in which computing comes to the data where it resides. The assumption is that localized computation does a better job and reduces the time data spends in transit.
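One hedged way to picture "computing coming to the data": run a reduction locally at each site where data already lives and ship only the partial results, instead of shipping every raw record to a central hub. The sketch below is a minimal illustration of that idea under assumed names and data; it is not the AGILE design.

```python
# Illustrative sketch: ship computation to the data, not data to the hub.
# "Sites" stand in for wherever the data already lives.

sites = {
    "site_a": [3, 1, 4, 1, 5],
    "site_b": [9, 2, 6, 5, 3],
    "site_c": [5, 8, 9, 7],
}

def centralized_sum(sites):
    """Dense-model style: move every record to one place, then compute."""
    all_records = [x for records in sites.values() for x in records]
    return sum(all_records), len(all_records)  # (result, values moved)

def localized_sum(sites):
    """Sparse-model style: compute a partial sum at each site and
    move only one number per site."""
    partials = [sum(records) for records in sites.values()]
    return sum(partials), len(partials)  # (result, values moved)

print(centralized_sum(sites))  # (68, 14) -- same answer, 14 values moved
print(localized_sum(sites))    # (68, 3)  -- same answer, 3 values moved
```

Both paths produce the same result; the localized version moves a fraction of the data, which is the trade the sparse model is after.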

Software carries equal weight, with a focus on applications like graph analytics, in which the strength of connections between data points is continuously analyzed. The sparse computing model also applies to machine learning, statistical methods, linear algebra and data filtering.
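A toy example of the graph-analytics workload described above: a weighted graph in which edge weights stand for connection strength, reinforced as new observations arrive and re-queried as the data changes. The graph, weights and function names here are illustrative assumptions, not anything from the AGILE program.

```python
# Toy graph analytics: edge weights model connection strength,
# updated as new data arrives and queried continuously.

from collections import defaultdict

graph = defaultdict(dict)  # node -> {neighbor: accumulated strength}

def observe(u, v, strength):
    """Record (or reinforce) a connection between u and v."""
    graph[u][v] = graph[u].get(v, 0) + strength
    graph[v][u] = graph[v].get(u, 0) + strength

def strongest_neighbor(u):
    """The analytic: which of u's connections is currently strongest?"""
    if not graph[u]:
        return None
    return max(graph[u], key=graph[u].get)

observe("a", "b", 1)
observe("a", "c", 3)
observe("a", "b", 5)  # the a-b connection is reinforced by new data

print(strongest_neighbor("a"))  # → b (strength 6 beats c's 3)
```

The point of the example is the access pattern: small, irregular updates and queries against links scattered across the structure, rather than one large dense computation.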

IARPA signed six contracts with organizations including AMD, Georgia Tech, Indiana University, Intel Federal LLC, Qualcomm and the University of Chicago to explore the best approach to developing the non-von Neumann computing model.

“There’s going to be an open discussion of the ideas that are being funded,” Harrod said.

The proposals suggest technological approaches such as the development of data-driven compute elements. Some of those technologies already exist, like CPUs with HBM and memory modules on substrates, Harrod said, adding, “it doesn’t solve all the problems we have here, but it is a step in that direction.”

The second technological approach involves intelligent mechanisms to move data. “It’s not just a question of a floating point sitting there doing load storage – that’s not an intelligent mechanism for moving data around,” Harrod said.

Most importantly, there needs to be a focus on the runtime system as the orchestrator of the sparse computing system.

“The assumption here is that these systems are doing something all the time. You really need to have something that is looking to see what is happening. You don’t want to have to be a programmer who takes total control of all this – then we’re all in serious trouble,” Harrod said.

The runtime will be important in creating the real-time nature of the computing environment.

“We want to be in a predictive environment versus a forensic environment,” Harrod said.

The proposals will need to be verified and validated via tools like FireSim, which measures the performance of novel architectures, Harrod said.


Approaches of the six partners (aka Performers in IARPA-speak):

* Sparse computing here is distinct from the established concept of “sparsity” in HPC and AI, in which a matrix is sparse if most of its entries are zero.
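The matrix sense of "sparsity" that the footnote contrasts with can be shown in a few lines: store only the nonzero entries of a matrix that is mostly zeros. A generic sketch, with made-up values:

```python
# The "sparsity" the footnote refers to: a matrix that is mostly zeros,
# stored as a coordinate map of its few nonzero entries.

dense = [
    [0, 0, 3, 0],
    [0, 0, 0, 0],
    [7, 0, 0, 0],
    [0, 0, 0, 1],
]

# Keep only (row, col) -> value for the nonzero entries.
sparse = {(i, j): v
          for i, row in enumerate(dense)
          for j, v in enumerate(row) if v != 0}

print(sparse)  # → {(0, 2): 3, (2, 0): 7, (3, 3): 1}
print(len(sparse), "of 16 entries stored")
```

That notion describes the shape of the data; the article's "sparse computing" describes where and how the computation runs.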
