Intel Labs Day – Quantum, Neuromorphic, Integrated Photonics, “Pursuit of 1000X” and More

By John Russell

December 8, 2020

It’s fascinating to see what a major company has percolating in the lab as it reflects the company’s mid- and longer-term expectations. At Intel Labs Day last week, the chip giant provided a glimpse into five priorities it is pursuing. The list won’t surprise you – quantum computing, neuromorphic computing, integrated photonics, machine programming (think machines programming machines), and what Intel calls confidential computing (think security).

Lab director Rich Uhlig was the master of ceremonies in what was a carefully scripted and smoothly run event in this new era of online conferences. While much of the material was familiar, there were deeper dives in all topics as well as a new product announcement in quantum (the Horse Ridge II controller chip), impressive benchmarks in neuromorphic computing (vs. CPUs and GPUs), and a few noteworthy collaborators discussing various joint projects.

The unifying concept for the day was unprecedented data growth. There’s an expectation we’ll generate on the order of 175 zettabytes in 2025. As one zettabyte equals 1,000 exabytes, Intel themed its agenda “In pursuit of 1000X: Disruptive Research for the Next Decade in Computing.”

Said Uhlig in his opening remarks, “The first step is to set an ambitious goal with an understanding that we need multiple orders of magnitude improvement along several vectors of technology spanning interconnects, compute and memory, and how we program and secure systems. As a shorthand, let’s call this our pursuit of 1000X.”

Here are a few highlights from various presentations.

Integrated Photonics

The question has long been when, not if, optical components will be needed inside chips and servers to achieve greater bandwidth. James Jaussi, senior principal engineer in Intel’s photonics lab, said, “[Photonics] has come a long way, however, because of the current cost, [the] physical size of the silicon photonics modules, and operating power, optical IO has not pushed into the shorter distance interconnects and this is our next big hurdle.”

Intel’s vision is for integrated photonics to drive the cost and the footprint down, he said: “We strive to have the capability of scaling IO volumes from millions to billions, a 1,000x increase. Future optical links will make all IO connections emanate directly from our server packages reaching fully across the datacenter.” Jaussi pointed out the following progress points:

  • Micro-ring modulators. Intel has miniaturized the modulator by a factor of more than 1,000, thereby eliminating a key barrier to integrating silicon photonics onto a compute package.
  • All-silicon photodetector. The industry has long believed silicon has virtually no light detection capability in the 1.3-1.6um wavelength range. Intel showcased research that proves otherwise with lower cost as a main benefit.
  • Integrated semiconductor optical amplifier. Targeting power reduction, it’s now possible to make integrated semiconductor optical amplifiers with the same material used for the integrated laser.
  • Integrated multi-wavelength lasers. Using wavelength division multiplexing (WDM), separate wavelengths can be used from the same laser to convey more data in the same beam of light.
  • Integration: Intel is the only company that has demonstrated integrated multi-wavelength lasers and semiconductor optical amplifiers, all-silicon photodetectors, and micro-ring modulators on a single technology platform tightly integrated with CMOS silicon.

“We feel these building blocks will help fundamentally change computer IO and revolutionize future datacenter communication,” said Jaussi, who also noted Intel’s disclosure last February of 3D stacked CMOS circuits interfacing directly with photonics by stacking two ICs, one on top of the other. “There is a clear inflection point between optical and electrical approaching,” he said.

Of course, many companies, new (Ayar Labs) and old (Nvidia), are feverishly tackling optical performance and packaging issues. The race is on.

Quantum Computing

Among the noisy quantum community, Intel had been largely quiet until the last year or so. It is focused on silicon-based spin qubit technology that can be fabbed using Intel’s existing CMOS manufacturing expertise. Anne Matsuura, director of quantum architecture, and Jim Clarke, Intel director of quantum hardware and components group, shared presentation duties.

In many ways, Intel has stepped more cautiously into the quantum computing waters.

“We believe that commercial scale quantum computers will enable simulation of these materials so that in the future we can also design materials, chemicals and drugs with properties that we desire,” said Matsuura during the opening session, but quickly added, “Today’s 100 qubits or even thousands of qubits will not get us there. [We] will need a full stack, commercial-scale quantum computing system of millions of qubits to attain quantum practicality for this type of ambitious problem solving.”

Spin qubits promise significant advantages (coherence time and scalable manufacturing among them) but present the same control drawbacks as all semiconductor-based qubits in being highly susceptible to noise. That means they must operate at temperatures near absolute zero inside dilution refrigerators. Getting microwave control signals to the qubits requires cables to be run into those refrigerators, and stuffing a million coax cables into one of them is a daunting, perhaps undoable task.

Intel is tackling that problem from a different direction with an integrated cryo-controller chip, Horse Ridge (named for the coldest spot in Oregon), which can be placed inside the fridge close to the qubit chip. It’s a significant change and a potential game-changer. In one of the few news items at Labs Day, Intel announced Horse Ridge II.

Horse Ridge II’s new features enable:

  • Qubit readout. The function grants the ability to read the current qubit state. The readout is significant, as it allows for on-chip, low-latency qubit state detection without storing large amounts of data, thus saving memory and power.
  • Multigate pulsing. The ability to simultaneously control the potential of many qubit gates is fundamental for effective qubit readouts and the entanglement and operation of multiple qubits, paving the path toward a more scalable system.

Here’s Intel’s description:

“The addition of a programmable microcontroller operating within the integrated circuit enables Horse Ridge II to deliver higher levels of flexibility and sophisticated controls in how the three control functions are executed. The microcontroller uses digital signal processing techniques to perform additional filtering on pulses, helping to reduce crosstalk between qubits.

“Horse Ridge II is implemented using Intel 22nm low-power FinFET technology (22FFL) and its functionality has been verified at 4 kelvins. Today, a quantum computer operates in the millikelvin range – just a fraction of a degree above absolute zero. But silicon spin qubits – the underpinning of Intel’s quantum efforts – have properties that could allow them to operate at temperatures of 1 kelvin or higher, which would significantly reduce the challenges of refrigerating the quantum system.”

It will be interesting to see if Horse Ridge could be used by other quantum computing companies. Intel hasn’t said it wouldn’t sell the chip to others.

Matsuura said, “Scaling is in Intel’s DNA. It is inherent to how we approach technology innovation, and quantum is no different. There are key areas that Intel’s quantum research program is focused on: spin qubit technologies, cryogenic control technology, and full stack innovation. Each of these areas addresses critical challenges that lie on the path to scaling quantum, and Intel is tackling each systematically to achieve scaling.

“We are introducing high volume, high throughput capabilities for our spin qubits with a cryo-probe. This is a one-of-a-kind piece of equipment that helps us test our chips on CMOS wafers in our fabs very rapidly. I mean, we’re talking hours instead of days with respect to time to information; we’re essentially mimicking the information turn cycle that we have in standard transistor research and development. With the cryo-probe, we can get test data and learnings from our research devices 1000x faster, significantly accelerating qubit development.”

Neuromorphic Computing

If practical quantum computing still seems far off (and it does), neuromorphic computing seems much closer, even if only in a limited number of applications. Intel is an active player and its Loihi chip, Pohoiki Springs system, and Intel Neuromorphic Research Community (100-plus members) – all taken together – represent one of the biggest vendor footprints in neuromorphic computing.

Mike Davies, director of Intel’s Neuromorphic Lab, covered a great deal of ground. While no new neuromorphic products were announced, he reviewed the technology in some detail and INRC (Intel Neuromorphic Research Community) member Accenture talked about three of its neuromorphic computing projects. He also spent a fair amount of time reviewing benchmark data versus both CPUs and Nvidia GPUs.

“Our focus has been on benchmarking Loihi’s performance against conventional architectures, so we can build confidence that neuromorphic chips in general can deliver on the promise. That said, over the past year, several other neuromorphic chips have been announced that sound to also be mature and optimized enough to give good results. That’s exciting because it means we can start comparing the strengths and weaknesses of different neuromorphic architectural and design choices. This kind of competitive benchmarking will accelerate progress in the field; we truly welcome healthy competition from other platforms,” said Davies.

By way of review, neuromorphic computing attempts to mimic how the brain’s neurons work. Roughly, this means using spiking neural networks (SNNs) to encode and accomplish computation instead of classic von Neumann processor-and-memory computing. The brain, of course, is famous for running on about 20 watts.
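For readers new to the idea, a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic building block of most SNNs, is shown below. It is illustrative Python only, not Loihi's programming model; the threshold, leak and input values are arbitrary assumptions chosen simply to show that computation is carried by discrete spikes rather than dense matrix arithmetic.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Leaky integrate-and-fire neuron: integrate input, leak toward zero,
    and emit a spike (1) whenever the membrane potential crosses threshold."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i              # leaky integration of the input
        if v >= threshold:            # threshold crossed -> spike and reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A brief burst of input drives a handful of spikes; silence produces none.
print(lif_neuron([0.3] * 10 + [0.0] * 10))
```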

Davies provided a succinct summary:

“To date the INRC has generated over 40 peer reviewed publications, many with quantified results confirming the promise of the technology to deliver meaningful gains. Several robotics workloads show 40 to 100 times lower power consumption on Loihi compared to conventional solutions. That includes an adaptive robotic arm application, a tactile sensing network that processes input from a new artificial skin technology, and a simultaneous localization and mapping workload, or SLAM as it’s called.

“On our large scale Pohoiki Springs system we demonstrated ‘similarity search’ running with 45 times lower power and over 100 times faster than a CPU implementation. Loihi [can] also solve hard optimization problems such as constraint satisfaction, and graph search over 100 times faster than a CPU with over 1,000 times lower energy. This means that future neuromorphic devices like drones could solve planning and navigation problems continuously in real time.

“All of this progress and these results give us a lot of confidence that neuromorphic computing, in time, will enable groundbreaking capabilities over a wide range of applications. In the near-term, the cost profile of the technology will limit applications to either the small scale such as edge devices and sensors, or to less cost-sensitive applications like satellites and specialized robots. Over time, we expect innovations in memory technologies to drive down the cost, allowing neuromorphic solutions to reach an expanding set of intelligent devices that need to process real time data where size, weight and power are all constraints.”

Alex Kass of Accenture, an INRC member, presented three projects involving voice command recognition, full body gesture classification, and adaptive control for mobile robots. “We focused on problems where edge AI is needed to complement cloud based capabilities. We look for problems that are difficult to solve with the CPUs or GPUs that are common today, and we most prefer to focus on capabilities that can be applied across many business contexts,” he said. One use case is in automotive.

Currently, AI hardware is too power hungry, which can impact vehicle performance and limit the possible applications, said Tim Shea, researcher with Accenture Labs. Smart vehicles need more efficient edge AI devices to meet the demand. Using edge AI devices to complement cloud-based AI could also increase responsiveness and improve reliability when connectivity is poor.

Shea said, “We’ve built a proof of concept system with one of our major automotive partners to demonstrate that neuromorphic computing can make cars smarter without draining the batteries. We’re using Intel’s Kapoho Bay (a compact Loihi-based device) to recognize voice commands that an owner would give to their vehicle. The Kapoho Bay is a portable and extremely efficient neuromorphic research device for AI at the edge. We’re comparing that proof of concept system against a standard approach using a GPU.”

In developing the POC system, Accenture trained spiking neural networks to differentiate between command phrases and then ran the trained networks on the Kapoho Bay. “We connected the Kapoho Bay to a microphone, and a controller similar to the electronic control units that operate various functions of a smart vehicle. We’re targeting commands that reflect features that can be accessed from outside of the smart vehicle, such as ‘park here’ or ‘unlock passenger door,’” said Shea. “These functions also need to be energy efficient, so the vehicle can remain responsive even when parked for long stretches of time.”

The first step, according to Shea, was getting the system to recognize simple commands such as “lights on,” “start engine,” etc. “Using a combination of open source voice recordings and a smaller sample of specific commands, we can approximate the kinds of voice processing needed for smart vehicles. We tested this approach by comparing our trained spiking neural networks running on Intel’s neuromorphic research cloud against a convolutional neural network running on a GPU.”

Both systems achieved acceptable accuracy recognizing the voice commands. “But we found that the neuromorphic system was up to one thousand times more efficient than the standard AI system with a GPU. This is extremely impressive, and it’s consistent with the results from other labs,” said Shea.

The dramatic improvement in energy efficiency, said Shea, derives from the fact that computation on the Loihi is extremely sparse. “While the GPU performs billions of computations per second, every second, the neuromorphic chip only processes changes in the audio signal, and neuron cores inside Loihi communicate efficiently with spikes,” he said.
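A rough way to picture that sparsity is delta (event-driven) encoding: downstream work happens only when the input changes. The sketch below is purely illustrative; the threshold and signal are invented and this is not Loihi's actual encoding pipeline.

```python
def delta_events(signal, threshold=0.05):
    """Emit an event only when the signal has changed by more than
    `threshold` since the last event; a quiet stream generates few events."""
    events = []
    last = signal[0]
    for t, x in enumerate(signal[1:], start=1):
        if abs(x - last) > threshold:
            events.append((t, x - last))   # (timestep, change) pairs
            last = x
    return events

# A dense pipeline touches every sample; the event-driven one only the changes.
signal = [0.0] * 100 + [0.5, 0.52, 0.51] + [0.5] * 100
events = delta_events(signal)
print(f"{len(signal)} samples -> {len(events)} events")
```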

Davies presented a fair amount of detail in a break-out discussion that is best watched directly.

Confidential Computing

Efforts to maintain data security and confidentiality are hardly new. Intel presented its ongoing efforts in that arena, which involve big bets on federated learning and homomorphic encryption, and, most recently, the launch of the Private AI Collaborative Research Institute “to advance and develop technologies in privacy and trust for decentralized artificial intelligence.”

“Today, encryption is used as a solution to protect data while it’s being sent across the network and while it’s stored, but data can still be vulnerable when it’s being used. Confidential computing allows data to be protected while in use,” said Jason Martin, principal engineer in the Security Solutions Lab and manager of the Secure Intelligence Team.

“Trusted execution environments provide a mechanism to perform confidential computing. They’re designed to minimize the set of hardware and software you need to trust to keep your data secure. To reduce the software that you must rely on, you need to ensure that other applications or even the operating system can’t compromise your data. Even if malware is present. Think of it as a safe that protects your valuables even from an intruder in the building,” he said.

Federated learning is one approach to maintaining security.

“In many industries such as retail, manufacturing, healthcare and financial services, the largest data sets are locked up in what are called data silos. These data silos may exist to address privacy concerns or regulatory challenges, or in some cases that data is just too large to move. However, these data silos create obstacles when using machine learning tools to gain valuable insights from the data. Medical imaging is an example where machine learning has made advances in identifying key patterns in MRIs such as the location of brain tumors, but is inhibited by these concerns. Intel Labs has been collaborating with the Center for Biomedical Image Computing and Analytics at the University of Pennsylvania Perelman School of Medicine on federated learning,” said Martin.

With federated learning, the computation is split such that each hospital trains the local version of the algorithm on their data at the hospital, and then sends what they learned to a central aggregator. This combines the models from each hospital into a single model without sharing the data. A study by UPenn and Intel showed federated learning “could train a deep learning model to within 99% of the accuracy of the same model trained with the traditional non-private method. We also showed that institutions did on average 17% better when trained in the Federation, compared to training with only their own data,” said Martin.
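The mechanic Martin describes, local training followed by central aggregation, can be sketched in a few lines. The linear model, plain weighted averaging and synthetic "silo" data below are illustrative assumptions, not the UPenn/Intel implementation; the point is only that model weights move while raw data stays put.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One hospital's local training: a few epochs of gradient descent on a
    simple linear model, using only that hospital's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(updates, sizes):
    """Central aggregator: combine local models into one global model,
    weighting each site by the size of its local dataset."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Hypothetical data silos: each site keeps its (X, y) private; only weights move.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
silos = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    silos.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                               # communication rounds
    updates = [local_update(global_w, X, y) for X, y in silos]
    global_w = federated_average(updates, [len(y) for _, y in silos])

print(global_w)   # approaches true_w without raw data ever leaving a silo
```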

Homomorphic encryption is a class of cryptosystems that allows applications to perform computation directly on encrypted data without exposing the data itself. The technology is emerging as a leading method to protect the privacy of data when delegating computation. For example, these cryptographic techniques allow cloud computation directly on encrypted data without the need to trust the cloud infrastructure, cloud service or other tenants.

“It turns out in fully homomorphic encryption, you can perform those basic operations on encrypted data using any algorithm of arbitrary complexity. And then when you decrypt the data, those operations are applied to the plaintext,” said Martin.
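To make that concrete, here is a toy Paillier example, an additively homomorphic scheme in which multiplying two ciphertexts decrypts to the sum of the two plaintexts. The tiny primes make it hopelessly insecure, and the fully homomorphic schemes Intel is researching support far richer computation than this sketch; it is meant only to show the decrypt-after-compute property Martin describes.

```python
import math
import random

def keygen(p=293, q=433):                 # toy primes; real keys are ~2048-bit
    n = p * q
    n2 = n * n
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # L(g^lam mod n^2)^-1 mod n
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2   # c = g^m * r^n mod n^2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
c_sum = (c1 * c2) % (pub[0] ** 2)         # operate on ciphertexts only
print(decrypt(pub, priv, c_sum))          # -> 42, the sum of the plaintexts
```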

The challenge with homomorphic encryption is overhead, particularly ciphertext size. “However, there are challenges that hinder the adoption of fully homomorphic encryption. In traditional encryption mechanisms to transfer and store data, the overhead is relatively negligible. But with fully homomorphic encryption, the size of homomorphic ciphertext is significantly larger than plain data, in some cases 1,000 to 10,000 times larger,” he said.

Machine Programming

Programs creating programs is a much-discussed topic in HPC and IT generally. Software development is hard, detailed work, and seldom done perfectly on the first pass. According to one study, programmers in the U.S. spend 50 percent of their time debugging.

“Think about machine programming helping us in two simultaneous directions,” said Justin Gottschlich, principal engineer and lead for Intel’s machine programming research group. “First, we want the machine programming systems to help coders and non-coders become more productive. Second, we want to ensure that the machine programming systems that do this are producing high quality code that’s fast and secure.”

At Labs Day, Intel unveiled ControlFlag – a machine programming research system that can autonomously detect errors in code. In preliminary tests, ControlFlag trained and learned novel defects on over 1 billion unlabeled lines of production-quality code.

“Let me describe two concrete systems that our machine programming team has developed and is working to integrate into production quality systems, just as a reference. We’ve built over a dozen of these systems now, but in the interest of time, we’ll just talk about these two. The first is a machine programming system that can automatically detect performance bugs. This system actually invents the tests to detect the performance issues. [H]istorically, these tests have been created by humans. With our system, the human doesn’t write a single line of code. On top of that, the same system can then automatically adapt those invented tests to different hardware architectures,” said Gottschlich.

“The second system that we’ve built also attempts to find bugs. But this system isn’t restricted to just performance bugs; it can find a variety of bugs. What’s so exciting is that unlike the prior solutions for finding bugs, the machine programming system that we’ve built, and we literally just built this a few months ago, learns to identify bugs without any human supervision. That means it learns without any human-generated labels of data. Instead, what we do is we send this system out into the world to learn about code. When it comes back, it has learned a number of amazing things. We then point it at a code repository, even code that is production quality and has been around for decades.”
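The flavor of that label-free approach can be conveyed with a deliberately crude sketch: count pattern frequencies across an unlabeled code corpus, then flag patterns in new code that were rarely or never seen. The token-trigram "patterns," tiny corpus and threshold below are illustrative assumptions and bear no resemblance to ControlFlag's actual internals.

```python
import re
from collections import Counter

def token_trigrams(code):
    """Crude pattern extraction: slide a three-token window over the code."""
    tokens = re.findall(r"[A-Za-z_]\w*|==|!=|<=|>=|[^\s\w]", code)
    return [tuple(tokens[i:i + 3]) for i in range(len(tokens) - 2)]

def learn_patterns(corpus):
    """'Training' without labels: count how often each pattern occurs
    across a body of existing, unlabeled code."""
    counts = Counter()
    for snippet in corpus:
        counts.update(token_trigrams(snippet))
    return counts

def flag_anomalies(counts, snippet, min_count=1):
    """Flag patterns in new code that were never (or rarely) seen before."""
    return [t for t in token_trigrams(snippet) if counts[t] < min_count]

corpus = [
    "if (x == 0) { return; }",
    "if (y == 1) { y = 0; }",
    "if (flag == true) { run(); }",
]
suspect = "if (x = 0) { return; }"   # assignment where a comparison is usual
print(flag_anomalies(learn_patterns(corpus), suspect))
```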

For a fuller peek into Intel Labs Day: https://newsroom.intel.com/press-kits/intel-labs-day-2020/#gs.mzg9zq
