Intel Labs Day – Quantum, Neuromorphic, Integrated Photonics, “Pursuit of 1000X” and More

By John Russell

December 8, 2020

It’s fascinating to see what a major company has percolating in the lab as it reflects the company’s mid- and longer-term expectations. At Intel Labs Day last week, the chip giant provided a glimpse into five priorities it is pursuing. The list won’t surprise you – quantum computing, neuromorphic computing, integrated photonics, machine programming (think machines programming machines), and what Intel calls confidential computing (think security).

Lab director Rich Uhlig was the master of ceremonies in what was a carefully scripted and smoothly run event in this new era of online conferences. While much of the material was familiar, there were deeper dives into all topics as well as a new product announcement in quantum (the Horse Ridge II controller chip), impressive benchmarks in neuromorphic computing (vs. CPUs/GPUs), and a few noteworthy collaborators discussing various joint projects.

The unifying concept for the day was unprecedented data growth. There’s an expectation we’ll generate on the order of 175 zettabytes in 2025. As one zettabyte equals 1,000 exabytes, Intel themed its agenda “In pursuit of 1000X: Disruptive Research for the Next Decade in Computing.”

Said Uhlig in his opening remarks, “The first step is to set an ambitious goal with an understanding that we need multiple orders of magnitude improvement along several vectors of technology spanning interconnects, compute and memory, and how we program and secure systems. As a shorthand, let’s call this our pursuit of 1000X.”

Here are a few highlights from various presentations.

Integrated Photonics

The question has long been when, not if, optical components will be needed inside chips and servers to achieve greater bandwidth. James Jaussi, senior principal engineer in Intel’s photonics lab, said, “[Photonics] has come a long way, however, because of the current cost, [the] physical size of the silicon photonics modules, and operating power, optical IO has not pushed into the shorter distance interconnects and this is our next big hurdle.”

Intel’s vision is for integrated photonics to drive the cost and the footprint down, he said: “We strive to have the capability of scaling IO volumes from millions to billions, a 1,000x increase. Future optical links will make all IO connections emanate directly from our server packages, reaching fully across the datacenter.” Jaussi pointed out the following progress points:

  • Micro-ring modulators. Intel has miniaturized the modulator by a factor of more than 1,000, thereby eliminating a key barrier to integrating silicon photonics onto a compute package.
  • All-silicon photodetector. The industry has long believed silicon has virtually no light detection capability in the 1.3–1.6 µm wavelength range. Intel showcased research that proves otherwise, with lower cost as a main benefit.
  • Integrated semiconductor optical amplifier. Targeting power reduction, it’s now possible to make integrated semiconductor optical amplifiers with the same material used for the integrated laser.
  • Integrated multi-wavelength lasers. Using wavelength division multiplexing (WDM), separate wavelengths from the same laser can be used to convey more data in the same beam of light (see the bandwidth sketch after this list).
  • Integration: Intel is the only company that has demonstrated integrated multi-wavelength lasers and semiconductor optical amplifiers, all-silicon photodetectors, and micro-ring modulators on a single technology platform tightly integrated with CMOS silicon.
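The arithmetic behind the WDM item above is simple but worth making concrete. Below is a minimal sketch, with illustrative lane counts and per-wavelength data rates (not Intel specifications), of how aggregate bandwidth scales linearly with the number of wavelengths multiplexed onto one fiber:

```python
# Back-of-the-envelope WDM arithmetic. The lane counts and the 100 Gbps
# per-wavelength rate are illustrative assumptions, not Intel specs.

def wdm_aggregate_gbps(num_wavelengths: int, gbps_per_wavelength: float) -> float:
    """Aggregate bandwidth of a WDM link with identical lanes."""
    return num_wavelengths * gbps_per_wavelength

for lanes in (1, 4, 8, 16):
    total = wdm_aggregate_gbps(lanes, 100.0)
    print(f"{lanes:2d} wavelengths x 100 Gbps = {total:6.0f} Gbps on one fiber")
```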

“We feel these building blocks will help fundamentally change computer IO and revolutionize future datacenter communication,” said Jaussi, who also noted Intel’s disclosure last February of 3D stacked CMOS circuits interfacing directly with photonics by stacking two ICs, one on top of the other. “There is a clear inflection point between optical and electrical approaching,” he said.

Of course, many companies, new (Ayar Labs) and old (Nvidia) are feverishly tackling optical performance and packaging issues. The race is on.

Quantum Computing

Among the noisy quantum community, Intel had been largely quiet until the last year or so. It is focused on silicon-based spin qubit technology that can be fabbed using Intel’s existing CMOS manufacturing expertise. Anne Matsuura, director of quantum architecture, and Jim Clarke, director of Intel’s quantum hardware and components group, shared presentation duties.

In many ways, Intel has stepped more cautiously into the quantum computing waters.

“We believe that commercial scale quantum computers will enable simulation of these materials so that in the future we can also design materials, chemicals and drugs with properties that we desire,” said Matsuura during the opening session, but quickly added, “Today’s 100 qubits or even thousands of qubits will not get us there. [We] will need a full stack, commercial-scale quantum computing system of millions of qubits to attain quantum practicality for this type of ambitious problem solving.”

Spin qubits promise significant advantages (coherence time and scalable manufacturing among them) but present the same control drawbacks as all semiconductor-based qubits in being highly susceptible to noise interference. That means they must operate at near-zero kelvin inside dilution refrigerators. Getting the microwave control signals to the qubits requires cables to be routed into those refrigerators, and stuffing a million coax cables into one of them is a daunting, perhaps undoable task.

Intel is tackling that problem from a different direction with an integrated cryo-controller chip, Horse Ridge (named for one of the coldest spots in Oregon), which can be placed inside the fridge close to the qubit chip. It’s a significant change and a potential game-changer. In one of the few news items at Labs Day, Intel announced Horse Ridge II.

New features enable:

  • Qubit readout. This function provides the ability to read the current qubit state. The readout is significant, as it allows for on-chip, low-latency qubit state detection without storing large amounts of data, thus saving memory and power.
  • Multigate pulsing. The ability to simultaneously control the potential of many qubit gates is fundamental for effective qubit readouts and for the entanglement and operation of multiple qubits, paving the path toward a more scalable system.

Here’s Intel’s description:

“The addition of a programmable microcontroller operating within the integrated circuit enables Horse Ridge II to deliver higher levels of flexibility and sophisticated controls in how the three control functions are executed. The microcontroller uses digital signal processing techniques to perform additional filtering on pulses, helping to reduce crosstalk between qubits.

“Horse Ridge II is implemented using Intel 22nm low-power FinFET technology (22FFL) and its functionality has been verified at 4 kelvins. Today, a quantum computer operates in the millikelvin range – just a fraction of a degree above absolute zero. But silicon spin qubits – the underpinning of Intel’s quantum efforts – have properties that could allow them to operate at temperatures of 1 kelvin or higher, which would significantly reduce the challenges of refrigerating the quantum system.”
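The “digital signal processing techniques to perform additional filtering on pulses” quoted above can be illustrated with a small sketch. The example below is hypothetical (it is not Horse Ridge II firmware; the sample rate, pulse width, and Gaussian filter parameters are invented): a hard-edged control pulse spreads energy across the spectrum, where it can crosstalk into neighboring qubits, while a filtered envelope concentrates energy in band:

```python
import numpy as np

# Sketch of pulse shaping in the spirit of Horse Ridge II's DSP filtering
# (illustrative only). A rectangular envelope has wide spectral sidelobes;
# convolving it with a Gaussian kernel suppresses out-of-band energy.

def gaussian_kernel(sigma_samples: float, width: int) -> np.ndarray:
    t = np.arange(width) - (width - 1) / 2.0
    k = np.exp(-0.5 * (t / sigma_samples) ** 2)
    return k / k.sum()                      # preserve pulse area

fs = 1e9                                    # assumed 1 GS/s sample rate
envelope = np.zeros(200)
envelope[50:150] = 1.0                      # 100 ns rectangular control pulse
shaped = np.convolve(envelope, gaussian_kernel(8.0, 49), mode="same")

def out_of_band_fraction(x: np.ndarray, band_hz: float = 50e6) -> float:
    """Fraction of spectral energy above `band_hz`."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return spec[freqs > band_hz].sum() / spec.sum()

print(f"out-of-band energy, raw pulse:    {out_of_band_fraction(envelope):.3%}")
print(f"out-of-band energy, shaped pulse: {out_of_band_fraction(shaped):.3%}")
```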

It will be interesting to see if Horse Ridge could be used by other quantum computing companies. Intel hasn’t said it wouldn’t sell the chip to others.

Matsuura said, “Scaling is in Intel’s DNA. It is inherent to how we approach technology innovation, and quantum is no different. There are key areas that Intel’s quantum research program is focused on: spin qubit technologies, cryogenic control technology, and full stack innovation. Each of these areas addresses critical challenges that lie on the path to scaling quantum, and Intel is tackling each systematically to achieve scaling.

“We are introducing high-volume, high-throughput capabilities for our spin qubits with a cryo-probe. This is a one-of-a-kind piece of equipment that helps us test our chips on CMOS wafers in our fabs very rapidly. I mean, we’re talking hours instead of days with respect to time to information; we’re essentially mimicking the information turn cycle that we have in standard transistor research and development. With the cryo-probe, we can get test data and learnings from our research devices 1,000x faster, significantly accelerating qubit development.”

Neuromorphic Computing

If practical quantum computing still seems far off (and it does), neuromorphic computing seems much closer, even if only in a limited number of applications. Intel is an active player and its Loihi chip, Pohoiki Springs system, and Intel Neuromorphic Research Community (100-plus members) – all taken together – represent one of the biggest vendor footprints in neuromorphic computing.

Mike Davies, director of Intel’s Neuromorphic Lab, covered a great deal of ground. While no new neuromorphic products were announced, he reviewed the technology in some detail, and INRC (Intel Neuromorphic Research Community) member Accenture talked about three of its neuromorphic computing projects. He also spent a fair amount of time reviewing benchmark data versus both CPUs and Nvidia GPUs.

“Our focus has been on benchmarking Loihi’s performance against conventional architectures, so we can build confidence that neuromorphic chips in general can deliver on the promise. That said, over the past year, several other neuromorphic chips have been announced that sound mature and optimized enough to give good results. That’s exciting because it means we can start comparing the strengths and weaknesses of different neuromorphic architectural and design choices. This kind of competitive benchmarking will accelerate progress in the field; we truly welcome healthy competition from other platforms,” said Davies.

By way of review, neuromorphic computing attempts to mimic how the brain’s neurons work. Roughly, this means using spiking neural networks (SNNs) to encode and accomplish computation instead of classic von Neumann processor-and-memory computing. The brain, of course, is famous for running on about 20 watts.
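For readers new to SNNs, the textbook building block is the leaky integrate-and-fire (LIF) neuron: membrane potential leaks toward rest, integrates input, and emits a discrete spike when it crosses a threshold. The minimal sketch below is only an illustration of that idea, with invented parameters; Loihi’s silicon neuron model is considerably richer:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (illustrative parameters).
# Computation is carried by sparse, discrete spike events rather than
# dense multiply-accumulate operations.

def lif_run(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron; return the membrane trace and spike times."""
    v, trace, spikes = v_reset, [], []
    for t, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)     # leak toward rest, integrate input
        if v >= v_thresh:               # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset                 # reset after spiking
        trace.append(v)
    return np.array(trace), spikes

rng = np.random.default_rng(0)
drive = 0.08 + 0.05 * rng.standard_normal(200)   # noisy constant input
_, spike_times = lif_run(drive)
print(f"{len(spike_times)} spikes in 200 timesteps at t = {spike_times}")
```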

Davies provided a succinct summary:

“To date the INRC has generated over 40 peer-reviewed publications, many with quantified results confirming the promise of the technology to deliver meaningful gains. Several robotics workloads show 40 to 100 times lower power consumption on Loihi compared to conventional solutions. That includes an adaptive robotic arm application, a tactile sensing network that processes input from a new artificial skin technology, and a simultaneous localization and mapping workload, or SLAM as it’s called.

“On our large-scale Pohoiki Springs system we demonstrated ‘similarity search’ running with 45 times lower power and over 100 times faster than a CPU implementation. Loihi [can] also solve hard optimization problems such as constraint satisfaction and graph search over 100 times faster than a CPU with over 1,000 times lower energy. This means that future neuromorphic devices like drones could solve planning and navigation problems continuously in real time.

“All of this progress and these results give us a lot of confidence that neuromorphic computing, in time, will enable groundbreaking capabilities over a wide range of applications. In the near term, the cost profile of the technology will limit applications either to the small scale, such as edge devices and sensors, or to less cost-sensitive applications like satellites and specialized robots. Over time, we expect innovations in memory technologies to drive down the cost, allowing neuromorphic solutions to reach an expanding set of intelligent devices that need to process real-time data where size, weight and power are all constraints.”

Alex Kass of Accenture, an INRC member, presented three projects involving voice command recognition, full-body gesture classification, and adaptive control for mobile robots. “We focused on problems where edge AI is needed to complement cloud-based capabilities. We look for problems that are difficult to solve with the CPUs or GPUs that are common today, and we most prefer to focus on capabilities that can be applied across many business contexts,” he said. One use case is in automotive.

Currently, AI hardware is too power hungry, which can impact vehicle performance and limit the possible applications, said Tim Shea, researcher with Accenture Labs. Smart vehicles need more efficient edge AI devices to meet the demand. Using edge AI devices to complement cloud-based AI could also increase responsiveness and improve reliability when connectivity is poor.

Shea said, “We’ve built a proof of concept system with one of our major automotive partners to demonstrate that neuromorphic computing can make cars smarter without draining the batteries. We’re using Intel’s Kapoho Bay (a version of the Loihi chip) to recognize voice commands that an owner would give to their vehicle. The Kapoho Bay is a portable and extremely efficient neuromorphic research device for AI at the edge. We’re comparing that proof of concept system against a standard approach using a GPU.”

In developing the POC system, Accenture trained spiking neural networks to differentiate between command phrases and then ran the trained networks on the Kapoho Bay. “We connected the Kapoho Bay to a microphone, and a controller similar to the electronic control units that operate various functions of a smart vehicle. We’re targeting commands that reflect features that can be accessed from outside of the smart vehicle, such as ‘park here’ or ‘unlock passenger door,’” said Shea. “These functions also need to be energy efficient, so the vehicle can remain responsive even when parked for long stretches of time.”

The first step, according to Shea, was getting the system to recognize simple commands such as “lights on,” “start engine,” etc. “Using a combination of open source voice recordings and a smaller sample of specific commands, we can approximate the kinds of voice processing needed for smart vehicles. We tested this approach by comparing our trained spiking neural networks running on Intel’s neuromorphic research cloud against a convolutional neural network running on a GPU.”

Both systems achieved acceptable accuracy recognizing the voice commands. “But we found that the neuromorphic system was up to one thousand times more efficient than the standard AI system with a GPU. This is extremely impressive, and it’s consistent with the results from other labs,” said Shea.

The dramatic improvement in energy efficiency, said Shea, derives from the fact that computation on the Loihi is extremely sparse. “While the GPU performs billions of computations per second, every second, the neuromorphic chip only processes changes in the audio signal, and neuron cores inside Loihi communicate efficiently with spikes,” he said.
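Shea’s sparsity point can be made concrete with a toy delta encoder: emit an event only when the signal moves past a threshold, so silence costs nothing. The signal and threshold below are invented for illustration; this is not Accenture’s or Intel’s actual encoding pipeline:

```python
import numpy as np

# Toy event-based (delta) encoding: events fire only on signal change,
# so a mostly silent input produces almost no work downstream.

def delta_events(signal, threshold):
    """Return (sample_index, +1/-1) events for threshold crossings."""
    events, ref = [], signal[0]
    for i, x in enumerate(signal):
        while x - ref >= threshold:
            events.append((i, +1)); ref += threshold
        while ref - x >= threshold:
            events.append((i, -1)); ref -= threshold
    return events

fs = 16_000
t = np.arange(fs) / fs                                   # 1 s of "audio"
sig = np.sin(2 * np.pi * 3 * t) * (t > 0.4) * (t < 0.6)  # brief burst, mostly silence

events = delta_events(sig, threshold=0.1)
print(f"dense samples: {len(sig)}, events: {len(events)} "
      f"({len(events) / len(sig):.2%} of the dense representation)")
```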

Davies presented a fair amount of detail in a break-out discussion that is best watched directly.

Confidential Computing

Efforts to maintain data security and confidentiality are hardly new. Intel presented its ongoing efforts in that arena, which involve big bets on federated learning, homomorphic encryption and, recently, the launch of the Private AI Collaborative Research Institute “to advance and develop technologies in privacy and trust for decentralized artificial intelligence.”

“Today, encryption is used as a solution to protect data while it’s being sent across the network and while it’s stored, but data can still be vulnerable when it’s being used. Confidential computing allows data to be protected while in use,” said Jason Martin, principal engineer in the Security Solutions Lab and manager of the Secure Intelligence Team.

“Trusted execution environments provide a mechanism to perform confidential computing. They’re designed to minimize the set of hardware and software you need to trust to keep your data secure. To reduce the software that you must rely on, you need to ensure that other applications or even the operating system can’t compromise your data, even if malware is present. Think of it as a safe that protects your valuables even from an intruder in the building,” he said.

Federated learning is one approach to maintaining security.

“In many industries such as retail, manufacturing, healthcare and financial services, the largest data sets are locked up in what are called data silos. These data silos may exist to address privacy concerns or regulatory challenges, or in some cases the data is just too large to move. However, these data silos create obstacles when using machine learning tools to gain valuable insights from the data. Medical imaging is an example where machine learning has made advances in identifying key patterns in MRIs, such as the location of brain tumors, but is inhibited by these concerns. Intel Labs has been collaborating with the Center for Biomedical Image Computing and Analytics at the University of Pennsylvania Perelman School of Medicine on federated learning,” said Martin.

With federated learning, the computation is split such that each hospital trains a local version of the algorithm on its data at the hospital, and then sends what it learned to a central aggregator, which combines the models from each hospital into a single model without sharing the data. A study by UPenn and Intel showed federated learning “could train a deep learning model to within 99% of the accuracy of the same model trained with the traditional non-private method. We also showed that institutions did on average 17% better when trained in the federation, compared to training with only their own data,” said Martin.
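At its core, the aggregation Martin describes is a weighted average of model parameters, commonly called federated averaging (FedAvg). The toy sketch below (synthetic linear regression standing in for the actual UPenn/Intel medical-imaging pipeline) shows only model weights traveling to the aggregator while each site’s raw data stays local:

```python
import numpy as np

# Toy federated averaging: each "hospital" fits a model on its private
# data; the aggregator averages weights, never seeing the data itself.

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])

def make_site(n):
    """Simulate one hospital's private dataset."""
    X = rng.standard_normal((n, 2))
    y = X @ true_w + 0.1 * rng.standard_normal(n)
    return X, y

def local_train(X, y):
    """Local least-squares fit; stands in for local training epochs."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

sites = [make_site(n) for n in (50, 200, 80)]       # unequal site sizes
local_models = [local_train(X, y) for X, y in sites]

# FedAvg: weight each site's model by its sample count.
sizes = [len(y) for _, y in sites]
global_model = np.average(local_models, axis=0, weights=sizes)
print("global model:", global_model, " true weights:", true_w)
```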

Homomorphic encryption is a class of cryptosystems that allows applications to perform computation directly on encrypted data without exposing the data itself. The technology is emerging as a leading method to protect the privacy of data when delegating computation. For example, these cryptographic techniques allow cloud computation directly on encrypted data without the need for trusting the cloud infrastructure, cloud service or other tenants.

“It turns out in fully homomorphic encryption, you can perform those basic operations on encrypted data using any algorithm of arbitrary complexity. And then when you decrypt the data, those operations are applied to the plaintext,” said Martin.

The challenge with homomorphic encryption is size. “However, there are challenges that hinder the adoption of fully homomorphic encryption. In traditional encryption mechanisms to transfer and store data, the overhead is relatively negligible. But with fully homomorphic encryption, the size of homomorphic ciphertext is significantly larger than plain data, in some cases 1,000 to 10,000 times larger,” he said.
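Both the compute-on-ciphertext idea and the size blow-up can be demonstrated with the classic Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. (Fully homomorphic schemes extend this to arbitrary computation.) The toy key below is wildly insecure and chosen only for readability:

```python
import math, secrets

# Toy Paillier cryptosystem: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
# NOT secure -- real keys use primes of 1024+ bits; these are illustrative.

p, q = 10_007, 10_009
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):                              # Paillier's L function
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # precomputed decryption constant

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:         # r must be invertible mod n
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 1234, 5678
c_sum = (encrypt(a) * encrypt(b)) % n2     # addition on ciphertexts
print("decrypted sum:", decrypt(c_sum))    # 6912, computed while encrypted
print(f"plaintext ~{(a + b).bit_length()} bits, "
      f"ciphertext ~{c_sum.bit_length()} bits")   # the size blow-up Martin notes
```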

Machine Programming

Programs creating programs is a much-discussed topic in HPC and IT generally. Software development is hard, detailed work, and seldom done perfectly on the first pass. According to one study, programmers in the U.S. spend 50 percent of their time debugging.

“Think about machine programming helping us in two simultaneous directions,” said Justin Gottschlich, principal engineer and lead of Intel’s machine programming research group. “First, we want the machine programming systems to help coders and non-coders become more productive. Second, we want to ensure that the machine programming systems that do this are producing high-quality code that’s fast and secure.”

At Labs Day, Intel unveiled ControlFlag, a machine programming research system that can autonomously detect errors in code. In preliminary tests, ControlFlag was trained on more than 1 billion unlabeled lines of production-quality code and learned to detect novel defects.
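Intel did not publish ControlFlag’s internals in this talk, but the announced idea, learning the common shapes of control structures from a large corpus and flagging statistically rare deviations as likely typos, can be caricatured in a few lines. Everything below (the pattern abstraction, the toy corpus, the rarity threshold) is invented for illustration:

```python
import re
from collections import Counter

# Caricature of anomaly-based bug detection: abstract each if-condition
# to a pattern, count pattern frequencies over a corpus, and flag
# conditions whose pattern is statistically rare (e.g. '=' vs '==').

def shape(cond):
    """Abstract a C-style condition, e.g. 'x == 0' -> 'v==n'."""
    s = re.sub(r"\b[A-Za-z_]\w*\b", "v", cond)   # identifiers -> v
    s = re.sub(r"\b\d+\b", "n", s)               # numeric literals -> n
    return s.replace(" ", "")

# Stand-in "corpus" of conditions mined from production code.
corpus = ["x == 0"] * 120 + ["ptr != NULL"] * 80 + ["n > 0"] * 60 + ["x = 0"]
counts = Counter(shape(c) for c in corpus)
total = sum(counts.values())

def is_anomalous(cond, rare_below=0.01):
    return counts[shape(cond)] / total < rare_below

for cond in ["y == 5", "flag = 1"]:
    verdict = "ANOMALY: did you mean '=='?" if is_anomalous(cond) else "ok"
    print(f"if ({cond}) -> {verdict}")
```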

“Let me describe two concrete systems that our machine programming team has developed and is working to integrate into production-quality systems, just as a reference. We’ve built over a dozen of these systems now, but in the interest of time, we’ll just talk about these two. The first is a machine programming system that can automatically detect performance bugs. This system actually invents the tests to detect the performance issues. [H]istorically, these tests have been created by humans. With our system, the human doesn’t write a single line of code. On top of that, the same system can then automatically adapt those invented tests to different hardware architectures,” said Gottschlich.
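Gottschlich did not show the generated tests themselves, so the following is purely a hypothetical sketch of what one machine-invented performance test could look like: time a routine at several input sizes, estimate the empirical scaling exponent, and flag a complexity regression:

```python
import timeit
from math import log

# Hypothetical auto-generated performance test: empirically estimate how
# runtime scales with input size and flag superlinear growth where the
# expected cost is (near-)constant.

def contains_slow(items, x):
    return x in list(items)        # deliberate bug: O(n) scan of a set

def empirical_exponent(fn, sizes=(1_000, 2_000, 4_000, 8_000)):
    times = []
    for n in sizes:
        data = set(range(n))
        times.append(timeit.timeit(lambda: fn(data, -1), number=200))
    # slope of log(time) vs log(size) approximates the scaling exponent
    return (log(times[-1]) - log(times[0])) / (log(sizes[-1]) - log(sizes[0]))

exp = empirical_exponent(contains_slow)
status = "PERF BUG: expected O(1) membership test" if exp > 0.5 else "ok"
print(f"empirical scaling exponent ~ {exp:.2f} -> {status}")
```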

“The second system that we’ve built also attempts to find bugs. But this system isn’t restricted to just performance bugs; it can find a variety of bugs. What’s so exciting is that, unlike prior solutions for finding bugs, the machine programming system that we’ve built, and we literally just built this a few months ago, learns to identify bugs without any human supervision. That means it learns without any human-generated labels of data. Instead, what we do is we send this system out into the world to learn about code. When it comes back, it has learned a number of amazing things. We then point it at a code repository, even code that is production quality and has been around for decades.”

For a fuller peek into Intel Labs Day: https://newsroom.intel.com/press-kits/intel-labs-day-2020/#gs.mzg9zq
