Intel Labs Day – Quantum, Neuromorphic, Integrated Photonics, “Pursuit of 1000X” and More

By John Russell

December 8, 2020

It’s fascinating to see what a major company has percolating in the lab as it reflects the company’s mid- and longer-term expectations. At Intel Labs Day last week, the chip giant provided a glimpse into five priorities it is pursuing. The list won’t surprise you – quantum computing, neuromorphic computing, integrated photonics, machine programming (think machines programming machines), and what Intel calls confidential computing (think security).

Lab director Rich Uhlig was the master of ceremonies in what was a carefully-scripted and smoothly-run event in this new era of online conferences. While much of the material was familiar, there were deeper dives into all topics as well as a new product announcement in quantum (the Horse Ridge II controller chip), impressive benchmarks in neuromorphic computing (vs. CPUs and GPUs), and a few noteworthy collaborators discussing various joint projects.

The unifying concept for the day was unprecedented data growth. There’s an expectation we’ll generate on the order of 175 zettabytes of data in 2025. As one zettabyte equals 1,000 exabytes, Intel themed its agenda “In pursuit of 1000X: Disruptive Research for the Next Decade in Computing.”

Said Uhlig in his opening remarks, “The first step is to set an ambitious goal with an understanding that we need multiple orders of magnitude improvement along several vectors of technology: interconnects, compute and memory, and how we program and secure systems. As a shorthand, let’s call this our pursuit of 1000X.”

Here are a few highlights from various presentations.

Integrated Photonics

The question has long been when, not if, optical components will be needed inside chips and servers to achieve greater bandwidth. James Jaussi, senior principal engineer in Intel’s photonics lab, said, “[Photonics] has come a long way, however, because of the current cost, [the] physical size of the silicon photonics modules, and operating power, optical IO has not pushed into the shorter distance interconnects and this is our next big hurdle.”

Intel’s vision is for integrated photonics to drive the cost and the footprint down, he said: “We strive to have the capability of scaling IO volumes from millions to billions, 1,000x increase. Future optical links will make all IO connections emanate directly from our server packages reaching fully across the datacenter.” Jaussi pointed out the following progress points:

  • Micro-ring modulators. Intel has miniaturized the modulator by a factor of more than 1,000, thereby eliminating a key barrier to integrating silicon photonics onto a compute package.
  • All-silicon photodetector. The industry has long believed silicon has virtually no light detection capability in the 1.3-1.6um wavelength range. Intel showcased research that proves otherwise with lower cost as a main benefit.
  • Integrated semiconductor optical amplifier. Targeting power reduction, it’s now possible to make integrated semiconductor optical amplifiers with the same material used for the integrated laser.
  • Integrated multi-wavelength lasers. Using wavelength division multiplexing (WDM), separate wavelengths can be used from the same laser to convey more data in the same beam of light.
  • Integration: Intel is the only company that has demonstrated integrated multi-wavelength lasers and semiconductor optical amplifiers, all-silicon photodetectors, and micro-ring modulators on a single technology platform tightly integrated with CMOS silicon.

“We feel these building blocks will help fundamentally change computer IO and revolutionize future datacenter communication,” said Jaussi, who also noted Intel’s disclosure last February of 3D stacked CMOS circuits interfacing directly with photonics by stacking two ICs, one on top of the other. “There is a clear inflection point between optical and electrical approaching,” he said.

Of course, many companies, new (Ayar Labs) and old (Nvidia), are feverishly tackling optical performance and packaging issues. The race is on.

Quantum Computing

Among the noisy quantum community, Intel had been largely quiet until the last year or so. It is focused on silicon-based spin qubit technology that can be fabbed using Intel’s existing CMOS manufacturing expertise. Anne Matsuura, Intel’s director of quantum architecture, and Jim Clarke, director of Intel’s quantum hardware and components group, shared presentation duties.

In many ways, Intel has stepped more cautiously into the quantum computing waters.

“We believe that commercial scale quantum computers will enable simulation of these materials so that in the future we can also design materials, chemicals and drugs with properties that we desire,” said Matsuura during the opening session, but quickly added, “Today’s 100 qubits or even thousands of qubits will not get us there. [We] will need a full stack, commercial-scale quantum computing system of millions of qubits to attain quantum practicality for this type of ambitious problem solving.”

Spin qubits promise many significant advantages (coherence time and scalable manufacturing among them) but present the same control drawbacks as all semiconductor-based qubits in being highly susceptible to noise interference. That means they must operate at temperatures near absolute zero inside dilution refrigerators. Getting microwave control signals to the qubits requires cables to be routed into those refrigerators, and stuffing a million coax cables into one of them is a daunting, perhaps undoable task.

Intel is tackling that problem from a different direction with an integrated cryo-controller chip, Horse Ridge (named for one of the coldest spots in Oregon), which can be placed inside the fridge close to the qubit chip. It’s a significant change and a potential game-changer. In one of the few news items at Labs Day, Intel announced Horse Ridge II.

New features enable:

  • Qubit readout. The function grants the ability to read the current qubit state. The readout is significant, as it allows for on-chip, low-latency qubit state detection without storing large amounts of data, thus saving memory and power.
  • Multigate pulsing. The ability to simultaneously control the potential of many qubit gates is fundamental for effective qubit readouts and the entanglement and operation of multiple qubits, paving the path toward a more scalable system.

Here’s Intel’s description:

“The addition of a programmable microcontroller operating within the integrated circuit enables Horse Ridge II to deliver higher levels of flexibility and sophisticated controls in how the three control functions are executed. The microcontroller uses digital signal processing techniques to perform additional filtering on pulses, helping to reduce crosstalk between qubits.

“Horse Ridge II is implemented using Intel 22nm low-power FinFET technology (22FFL) and its functionality has been verified at 4 kelvins. Today, a quantum computer operates in the millikelvin range – just a fraction of a degree above absolute zero. But silicon spin qubits – the underpinning of Intel’s quantum efforts – have properties that could allow them to operate at temperatures of 1 kelvin or higher, which would significantly reduce the challenges of refrigerating the quantum system.”

It will be interesting to see if Horse Ridge could be used by other quantum computing companies. Intel hasn’t said it wouldn’t sell the chip to others.

Matsuura said, “Scaling is in Intel’s DNA. It is inherent to how we approach technology innovation, and quantum is no different. There are key areas that Intel’s quantum research program is focused on: spin qubit technologies, cryogenic control technology, and full stack innovation. Each of these areas addresses critical challenges that lie on the path to scaling quantum, and Intel is tackling each systematically to achieve scaling.

“We are introducing high volume, high throughput capabilities for our spin qubits with a cryo-probe. This is a one-of-a-kind piece of equipment that helps us test our chips on CMOS wafers in our fabs very rapidly. I mean, we’re talking hours instead of days with respect to time to information; we’re essentially mimicking the information turn cycle that we have in standard transistor research and development. With the cryo-probe, we can get test data and learnings from our research devices 1000x faster, significantly accelerating qubit development.”

Neuromorphic Computing

If practical quantum computing still seems far off (and it does), neuromorphic computing seems much closer, even if only in a limited number of applications. Intel is an active player and its Loihi chip, Pohoiki Springs system, and Intel Neuromorphic Research Community (100-plus members) – all taken together – represent one of the biggest vendor footprints in neuromorphic computing.

Mike Davies, director of Intel’s Neuromorphic Lab, covered a great deal of ground. While no new neuromorphic products were announced, he reviewed the technology in some detail, and INRC (Intel Neuromorphic Research Community) member Accenture talked about three of its neuromorphic computing projects. Davies also spent a fair amount of time reviewing benchmark data versus both CPUs and Nvidia GPUs.

“Our focus has been on benchmarking Loihi’s performance against conventional architectures, so we can build confidence that neuromorphic chips in general can deliver on the promise. That said, over the past year, several other neuromorphic chips have been announced that sound to also be mature and optimized enough to give good results. That’s exciting because it means we can start comparing the strengths and weaknesses of different neuromorphic architectural and design choices. This kind of competitive benchmarking will accelerate progress in the field; we truly welcome healthy competition from other platforms,” said Davies.

By way of review, neuromorphic computing attempts to mimic how the brain’s neurons work. Roughly, this means using spiking neural networks (SNNs) to encode and carry out computation instead of the classic von Neumann processor-and-memory approach. The brain, of course, is famous for running on roughly 20 watts.
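To make the spiking idea concrete, here is a minimal leaky integrate-and-fire (LIF) neuron in plain Python. This is an illustrative toy under assumed parameters (leak time constant, threshold, reset value), not Loihi’s programming model; the point is simply that the neuron does meaningful work only when spikes arrive.

```python
# A minimal leaky integrate-and-fire (LIF) neuron sketch.
# Illustrative toy only; parameter values are arbitrary assumptions.
import numpy as np

def lif_neuron(input_spikes, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron; return the time steps at which it spikes."""
    v = 0.0
    out_spikes = []
    for t, s in enumerate(input_spikes):
        # Membrane potential leaks toward zero and integrates incoming spikes.
        v += dt * (-v / tau) + s
        if v >= v_thresh:        # threshold crossing emits an output spike
            out_spikes.append(t)
            v = v_reset          # reset after spiking
    return out_spikes

# Sparse input: the neuron only updates meaningfully when spikes arrive,
# which is the source of neuromorphic hardware's efficiency claims.
rng = np.random.default_rng(0)
spikes_in = (rng.random(100) < 0.05).astype(float)  # ~5% of steps carry a spike
print(lif_neuron(spikes_in))
```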

Davies provided a succinct summary:

“To date the INRC has generated over 40 peer-reviewed publications, many with quantified results confirming the promise of the technology to deliver meaningful gains. Several robotics workloads show 40 to 100 times lower power consumption on Loihi compared to conventional solutions. That includes an adaptive robotic arm application, a tactile sensing network that processes input from a new artificial skin technology, and a simultaneous localization and mapping workload, or SLAM as it’s called.

“On our large scale Pohoiki Springs system we demonstrated ‘similarity search’ running with 45 times lower power and over 100 times faster than a CPU implementation. Loihi [can] also solve hard optimization problems such as constraint satisfaction, and graph search over 100 times faster than a CPU with over 1,000 times lower energy. This means that future neuromorphic devices like drones could solve planning and navigation problems continuously in real time.

“All of this progress and results give us a lot of confidence that neuromorphic computing, in time, will enable groundbreaking capabilities over a wide range of applications. In the near term, the cost profile of the technology will limit applications to either the small scale, such as edge devices and sensors, or to less cost-sensitive applications like satellites and specialized robots. Over time, we expect innovations in memory technologies to drive down the cost, allowing neuromorphic solutions to reach an expanding set of intelligent devices that need to process real-time data where size, weight and power are all constraints.”

Alex Kass of Accenture, an INRC member, presented three projects involving voice command recognition, full body gesture classification, and adaptive control for mobile robots. “We focused on problems where edge AI is needed to complement cloud based capabilities. We look for problems that are difficult to solve with the CPUs or GPUs that are common today, and we most prefer to focus on capabilities that can be applied across many business contexts,” he said. One use case is in automotive.

Currently, AI hardware is too power hungry, which can impact vehicle performance and limit the possible applications, said Tim Shea, a researcher with Accenture Labs. Smart vehicles need more efficient edge AI devices to meet the demand. Using edge AI devices to complement cloud-based AI could also increase responsiveness and improve reliability when connectivity is poor.

Shea said, “We’ve built a proof of concept system with one of our major automotive partners to demonstrate that neuromorphic computing can make cars smarter without draining the batteries. We’re using Intel’s Kapoho Bay (version of Loihi chip) to recognize voice commands that an owner would give to their vehicle. The Kapoho Bay is a portable and extremely efficient neuromorphic research device for AI at the edge. We’re comparing that proof of concept system against a standard approach using a GPU.”

In developing the POC system, Accenture trained spiking neural networks to differentiate between command phrases and then ran the trained networks on the Kapoho Bay. “We connected the Kapoho Bay to a microphone, and a controller similar to the electronic control units that operate various functions of a smart vehicle. We’re targeting commands that reflect features that can be accessed from outside of the smart vehicle, such as ‘park here’ or ‘unlock passenger door,’” said Shea. “These functions also need to be energy efficient, so the vehicle can remain responsive even when parked for long stretches of time.”

The first step, according to Shea, was getting the system to recognize simple commands such as “lights on,” “start engine,” etc. “Using a combination of open source voice recordings and a smaller sample of specific commands, we can approximate the kinds of voice processing needed for smart vehicles. We tested this approach by comparing our trained spiking neural networks running on Intel’s neuromorphic research cloud against a convolutional neural network running on a GPU.”

Both systems achieved acceptable accuracy in recognizing the voice commands. “But we found that the neuromorphic system was up to one thousand times more efficient than the standard AI system with a GPU. This is extremely impressive, and it’s consistent with the results from other labs,” said Shea.

The dramatic improvement in energy efficiency, said Shea, derives from the fact that computation on the Loihi is extremely sparse. “While the GPU performs billions of computations per second, every second, the neuromorphic chip only processes changes in the audio signal, and neuron cores inside Loihi communicate efficiently with spikes,” he said.
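A rough sense of that sparsity can be had from a toy delta-modulation encoder: emit an event only when the input changes by more than a threshold, so steady or silent stretches of signal generate no work at all. This is a generic sketch of event-driven encoding, not Loihi’s actual audio front end, and the threshold and waveform are invented for the demo.

```python
# Toy delta-modulation encoder: spikes only on significant signal changes.
import numpy as np

def delta_encode(signal, threshold=0.05):
    """Convert a sampled waveform into (+1/-1) events on significant changes."""
    events = []          # list of (sample_index, polarity)
    ref = signal[0]      # last value that triggered an event
    for i, x in enumerate(signal[1:], start=1):
        if x - ref >= threshold:
            events.append((i, +1))
            ref = x
        elif ref - x >= threshold:
            events.append((i, -1))
            ref = x
    return events

t = np.linspace(0, 1, 16000)                 # 1 second at 16 kHz
audio = 0.2 * np.sin(2 * np.pi * 440 * t)    # a pure tone...
audio[8000:] = 0.0                           # ...followed by silence
events = delta_encode(audio)
print(f"{len(events)} events for {len(audio)} samples "
      f"({100 * len(events) / len(audio):.1f}% of samples produce work)")
```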

Davies presented a fair amount of detail in a break-out discussion that is best watched directly.

Confidential Computing

Efforts to maintain data security and confidentiality are hardly new. Intel presented its ongoing efforts in that arena, which involve big bets on federated learning and homomorphic encryption, as well as the recent launch of the Private AI Collaborative Research Institute “to advance and develop technologies in privacy and trust for decentralized artificial intelligence.”

“Today, encryption is used as a solution to protect data while it’s being sent across the network and while it’s stored, but data can still be vulnerable when it’s being used. Confidential computing allows data to be protected while in use,” said Jason Martin, principal engineer in the Security Solutions Lab and manager of the Secure Intelligence Team.

“Trusted execution environments provide a mechanism to perform confidential computing. They’re designed to minimize the set of hardware and software you need to trust to keep your data secure. To reduce the software that you must rely on, you need to ensure that other applications or even the operating system can’t compromise your data, even if malware is present. Think of it as a safe that protects your valuables even from an intruder in the building,” he said.

Federated learning is one approach to maintaining security.

“In many industries such as retail, manufacturing, healthcare and financial services, the largest data sets are locked up in what are called data silos. These data silos may exist to address privacy concerns or regulatory challenges, or in some cases that data is just too large to move. However, these data silos create obstacles when using machine learning tools to gain valuable insights from the data. Medical imaging is an example where machine learning has made advances in identifying key patterns in MRIs such as the location of brain tumors, but is inhibited by these concerns. Intel labs has been collaborating with the Center for Biomedical Image Computing and Analytics at the University of Pennsylvania Perelman School of Medicine on federated learning,” said Martin.

With federated learning, the computation is split such that each hospital trains the local version of the algorithm on their data at the hospital, and then sends what they learned to a central aggregator. This combines the models from each hospital into a single model without sharing the data. A study by UPenn and Intel showed federated learning “could train a deep learning model to within 99% of the accuracy of the same model trained with the traditional non-private method. We also showed that institutions did on average 17% better when trained in the Federation, compared to training with only their own data,” said Martin.
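The aggregation step Martin describes maps onto the widely used federated-averaging pattern. The sketch below is a generic toy with made-up data (a tiny logistic-regression model per “hospital”), not the UPenn/Intel code, but it shows the essential property: only model parameters, never raw records, leave a site.

```python
# Minimal federated-averaging sketch: local training, central aggregation.
import numpy as np

def local_train(X, y, epochs=50, lr=0.1):
    """Fit a tiny logistic-regression model locally; return its weights."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on local data only
    return w

def federated_average(local_weights, sizes):
    """Combine per-site weights, weighting each site by its dataset size."""
    return np.average(np.stack(local_weights), axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(1)
sites = []
for _ in range(3):                          # three hypothetical hospitals
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    sites.append((X, y))

weights = [local_train(X, y) for X, y in sites]   # raw data never leaves a site
global_w = federated_average(weights, [len(y) for _, y in sites])
print("aggregated model weights:", np.round(global_w, 3))
```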

Homomorphic encryption is a cryptographic technique that allows applications to perform computation directly on encrypted data without exposing the data itself. It is emerging as a leading method for protecting the privacy of data when delegating computation. For example, these cryptographic techniques allow cloud computation directly on encrypted data without the need for trusting the cloud infrastructure, cloud service or other tenants.

“It turns out in fully homomorphic encryption, you can perform those basic operations on encrypted data using any algorithm of arbitrary complexity. And then when you decrypt the data, those operations are applied to the plaintext,” said Martin.
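The property Martin describes can be illustrated with a far simpler scheme than full FHE: textbook RSA happens to be multiplicatively homomorphic, so multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. The tiny key below is for illustration only; real fully homomorphic schemes (and Intel’s work) are vastly more elaborate.

```python
# Toy demonstration of a homomorphic property using textbook RSA.
# The parameters are deliberately tiny and insecure; illustration only.
p, q = 61, 53
n = p * q                             # modulus (3233)
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

m1, m2 = 7, 11
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever seeing the plaintexts...
c_prod = (c1 * c2) % n

# ...and decryption yields the product of the original messages.
assert decrypt(c_prod) == (m1 * m2) % n
print(decrypt(c_prod))   # 77
```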

The main practical challenge with homomorphic encryption is data size. “However, there are challenges that hinder the adoption of fully homomorphic encryption. In traditional encryption mechanisms to transfer and store data, the overhead is relatively negligible. But with fully homomorphic encryption, the size of homomorphic ciphertext is significantly larger than plain data, in some cases 1,000 to 10,000 times larger,” he said.

Machine Programming

Programs creating programs is a much-discussed topic in HPC and IT generally. Software development is hard, detailed work, and seldom done perfectly on the first pass. According to one study, programmers in the U.S. spend 50 percent of their time debugging.

“Think about machine programming helping us in two simultaneous directions,” said Justin Gottshlich, principal engineer and lead for Intel’s machine programming research group. “First, we want the machine programming systems to help coders and non-coders become more productive. Second, we want to ensure that the machine programming systems that do this are producing high quality code that’s fast and secure.”

At Labs Day, Intel unveiled ControlFlag, a machine programming research system that can autonomously detect errors in code. In preliminary tests, ControlFlag trained on more than 1 billion unlabeled lines of production-quality code and learned to detect novel defects.

“Let me describe two concrete systems that our machine programming team has developed and is working to integrate into production-quality systems, just as a reference. We’ve built over a dozen of these systems now, but in the interest of time, we’ll just talk about these two. The first is a machine programming system that can automatically detect performance bugs. This system actually invents the tests to detect the performance issues. [H]istorically, these tests have been created by humans. With our system, the human doesn’t write a single line of code. On top of that, the same system can then automatically adapt those invented tests to different hardware architectures,” said Gottshlich.

“The second system that we’ve built also attempts to find bugs. But this system isn’t restricted to just performance bugs; it can find a variety of bugs. What’s so exciting is that unlike the prior solutions of finding bugs, the machine programming system that we’ve built, and we literally just built this a few months ago, learns to identify bugs without any human supervision. That means it learns without any human generated labels of data. Instead, what we do is we send this system out into the world to learn about code. When it comes back, it has learned a number of amazing things, we then point it at a code repository, even code that is production quality and has been around for decades.”
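The “learn what’s typical, flag what’s rare” idea behind that second system can be sketched in a few lines. The example below is emphatically not ControlFlag; it is a generic toy that mines comparison idioms from an invented Python corpus with the standard ast module and flags the rare ones for a human to review.

```python
# Toy anomaly-style pattern check: count common comparison idioms in a
# corpus, then flag the rare outliers. Generic illustration only.
import ast
from collections import Counter

def comparison_signature(node):
    """Coarse (left, op, right) signature for a single-operator comparison."""
    comp = node.comparators[0]
    right = (type(comp.value).__name__ if isinstance(comp, ast.Constant)
             else type(comp).__name__)
    return (type(node.left).__name__, type(node.ops[0]).__name__, right)

def signatures(source):
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare) and len(node.ops) == 1:
            yield comparison_signature(node)

corpus = [
    "if x == 0: pass",          "if n == 0: pass",
    "if count == limit: pass",  "if size == capacity: pass",
    "if name == 'r': pass",     "if mode == 'w': pass",
    "if flag == True: pass",    # the unidiomatic pattern we hope to surface
]

counts = Counter(s for src in corpus for s in signatures(src))
total = sum(counts.values())
for pattern, c in counts.items():
    if c / total < 0.2:          # rarity threshold is an arbitrary assumption
        print("unusual pattern, worth a second look:", pattern)
```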

For a fuller peek into Intel Labs Day: https://newsroom.intel.com/press-kits/intel-labs-day-2020/#gs.mzg9zq
