Glimpse into ORNL Quantum Science Center Efforts to Find the Elusive Majorana and Much More

By John Russell

August 16, 2022

The Quantum Science Center (QSC), headquartered at Oak Ridge National Laboratory, is one of five such centers created by the National Quantum Initiative Act of 2018 and run by the Department of Energy. The centers all have distinct but overlapping goals. That’s sort of the point: to bring focus, cooperation, and a heavy dose of industry participation to advancing quantum information science broadly, and quantum computing directly, in the U.S.

All of the centers have ambitious goals, but perhaps none is more ambitious than QSC’s – to help deliver topological quantum computing. This approach depends on an as-yet unproven quasiparticle, the Majorana, one of a class of mysterious particles called non-abelian anyons, which obey non-abelian exchange statistics. We won’t dig deeply into that beyond saying such a quantum computer should be extremely resistant to error – a very good thing – and may also be able to directly map physical problems onto quantum hardware. The answers would have quantum laws baked in and be extremely accurate. Remember digital twins? Think quantum twins instead, with formidable predictive power.

Travis Humble, ORNL QSC. Credit: Carlos Jones, ORNL

The race for topological quantum computing is a bit of a gamble. There are skeptics. Microsoft has been the biggest champion of the topological approach and is a close QSC collaborator. Interestingly, in its effort to flesh out topological quantum computing, QSC is leveraging existing NISQ systems. Travis Humble, newly named director of QSC, calls this “using today’s quantum computers to try and build tomorrow’s quantum computers.”

Don’t be misled. There is a great deal more than chasing non-abelian particles going on at QSC, which is digging into materials science, algorithm development, and sensors, although much of the work in these areas is intended to support development of topological computers (see QSC’s science thrusts below, excerpted from the QSC website).

Thrust 1: Quantum Materials Discovery and Development

Thrust 1 demonstrates and controls non-abelian anyon states relevant to Quantum Information Science (QIS) in real materials. These states are expected to exist in electronic materials with nontrivial topologies and magnetic systems with entangled quantum spins, and the topological protection and delocalization of the states that make them attractive for QIS applications can also make them difficult to probe and to understand. Thus, research in this thrust is focused on understanding and developing topological electronic materials, quantum spin systems, and quantum probes. Led by ORNL’s Michael McGuire.

Thrust 2: Quantum Algorithms and Simulation

Thrust 2 achieves predictive capabilities for the study of strongly coupled quantum systems, including topological systems and quantum field theories, and develops and tests quantum algorithms for quantum-limited sensors. QSC researchers are developing efficient, scalable, and robust quantum simulation and metrology algorithms, testing these algorithms in predictive dynamical quantum simulation and quantum sensing applications, and developing software tools to support algorithm analysis, optimization, and implementation. Led by LANL’s Andrew Sornborger.

Thrust 3: Quantum Devices and Sensors for Discovery Science

Thrust 3 develops an understanding of fundamental sensing mechanisms in high-performance quantum devices and sensors. This understanding allows QSC researchers, working across the Center, to co-design new quantum devices and sensors with improved energy resolution, lower energy detection thresholds, better spatial and temporal resolution, lower noise, and lower error rates. Going beyond proof-of-principle demonstrations, the focus is on implementation of this hardware in specific, real-world applications. Led by Fermilab’s Aaron Chou.

Humble emphasizes that QSC is off to a fast start and, like all of DOE’s QIS centers, focused on concrete deliverables. “We’ve (QSC) had a really good publication record. I think we’re up to like 111 peer-reviewed publications as of last month. But in addition, we’re focusing a lot on invention disclosures and software copyrights because we see those as ways to get these ideas out in industry a little bit faster. It’s great to publish papers, but really the role of the centers is to act as engines of innovation within the big QIS ecosystem. So just publishing papers isn’t good enough, to be honest, we actually have to transition the technology.”

It is perhaps noteworthy that the QIS centers seem to be trying to carve out identities beyond the labs at which they are headquartered. Humble said, “You’re exactly right. There’s so much interest in this topic at the moment that any one institution is ill-prepared to take it all on. So for example, at Oak Ridge, we’re the lead for the Quantum Science Center, but there are 17 partners overall contributing to it, and honestly, if we took any one of them away, we’d end up with a gap in our capabilities.”

HPCwire recently talked with Humble about QSC’s expansive plans. The center has roughly 258 users – “Really I should call them members of the center. That includes everyone: our advisory boards, our students, our staff, our postdocs. I think the 250 number is probably stable right now.” – and its budget is fixed at about $25 million per year, “so we’re not really growing the research portfolio,” said Humble.

Presented here is a portion of that wide-ranging conversation.

HPCwire: Maybe you could start by giving an overview. For example, the focus on topological quantum computing seems like a more distant goal compared to other centers. Also, maybe you could talk a little bit about key near-term deliverables.

Humble: As you mentioned, the National Quantum Initiative Act of 2018 directed the Department of Energy to establish these National QIS Research Centers. They ended up selecting five; the Quantum Science Center was one of them, headquartered at Oak Ridge. The other four were also headquartered at national laboratories, which is maybe not surprising in hindsight. I would say what makes the QSC distinct is that we are focused on this question of how we can leverage materials science to build better quantum computers. The non-abelian anyons are a fundamental type of particle, and we are trying to be the first to demonstrate not only that we can create those types of particles, but that we can then use them for quantum computation. That’s a long-term goal.

I think in the first five years, we are probably going to get to the point of discovering the material that can host these particles; we’ve already got some really good candidates out there. It will take more time, though, to transform that into an actual quantum computer.

Overall, we have three topical research focuses, what we call thrust areas. These include materials science; computational science – really, quantum computing algorithms and applications – and sensors. The efforts are all interrelated. Think about it this way. First I’ve got to create these new types of quantum materials. But in order to confirm those quantum materials, I’m going to need new types of quantum sensors. And in order to create those quantum sensors, I’m going to need these new types of quantum algorithms. So in the end, it’s a big cycle of productivity. That’s the crux of the center, to keep that cycle of interaction going.

HPCwire: The focus on non-abelian anyon systems seems a more long-term goal. Other centers are working on better-known and better-understood approaches such as trapped ions, superconducting qubits, and cold atoms. Does that mean that the QSC is maybe the most future-looking of the centers? It doesn’t seem like there’s a near-term payoff here.

Humble: We are also looking at superconducting electronics, trapped ions, and photonics; those are actually part of our thrust area on quantum computing, where we’re trying to use today’s quantum computers to gain insights into the materials and properties that we’re going to need to scale these things up. As you said, today’s technologies – superconducting, trapped ions, etc. – are very good for proof-of-principle demonstrations. But I think everyone’s a little worried about how you could build a full-scale system. It gets very complex to manage all the resources that would be required to make that, basically, production-level. That’s where the new types of materials come in; they could actually reduce the complexity of building these future systems. But it hinges on finding those non-abelian anyons.

So in this sense, I do agree that we are using today’s quantum computers to try and build tomorrow’s quantum computers. But the output along the way is that we are writing programs for superconducting devices and trapped-ion devices, and we’re getting good results from that. And we’re building these sensors, which can actually be used today for detecting new types of materials – even dark matter. We have a partner at Fermilab that is actually focused on looking for dark matter candidates that may be out there. All of that ends up being some of the output that we’re generating in the near term.

The idea of partnering with Microsoft as part of our center to build that future quantum computer, that’s probably on the 10-year timescale. I don’t think that we’re going to build a topological quantum computer in the next three years. Of course not. But seven, eight years from now, we may actually have some working prototypes. So it’s future-oriented, yes, but maybe kind of intermediate-future.

HPCwire: What fundamental advantage does a topological quantum computer based on these non-abelian particles have over the other technologies?

Humble: Probably the leading source of error at the moment in the existing technologies is fluctuations in the control signals being used for these operations. And it turns out that’s where the topological model is more resilient: it is less sensitive to those local control fluctuations. So I do think, to first order, it would help get around that engineering challenge. Of course, you have to worry about what’s next, right? Once you tamp that down, are you going to pick up another problem that’s even harder – one we don’t know about yet? The theory says, yes, you’ll be able to build these types of quantum computers at larger sizes and operate them more efficiently if you have this type of material. But there’s a big question, you know, which is how that really turns out in practice. I think we won’t understand that until we build some of these prototypes and start getting feedback.

HPCwire: Will such a system be able to use the quantum ecosystem (middleware, programming tools, etc.) that’s rapidly growing now?

Humble: I actually think that’s essential. My personal opinion is there will not be one technology that we build quantum computers from, in the same way that conventional computing has [been] functionalized into memory, and compute, and bandwidth, and all these things. We’re going to need the same thing on the quantum side. At the moment, we’re all focused on that one technology piece (compute) because we have to develop it. But once you get to a full system that’s really productive, it’s probably going to be a mixture of technologies, which means that your higher-level control systems and architecture have to be agnostic to the individual hardware.

I think of it in terms of Oak Ridge and HPC. You know, we invest so much in building up the code base and the tools for all this – if we had to rewrite that every time we changed the architecture, that would be a failing proposition. I think the same is going to be true for quantum. Even if we can create a new type of quantum computer, it’s going to have to be backwards compatible with some of the [tools and systems].
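For a concrete picture of what that hardware-agnostic layering looks like, here is a minimal sketch in Python, using Qiskit purely as an illustration; it is not QSC’s toolchain, and the two gate sets are stand-ins for superconducting-style and trapped-ion-style devices. The circuit is written once at the abstract level and retargeted by the compiler.

```python
# Minimal sketch of hardware-agnostic quantum programming (Qiskit used only
# as an illustration; the two basis gate sets below are illustrative, not
# any specific vendor's actual native gates).
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Define the circuit once, at the abstract (hardware-agnostic) level.
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
bell.measure_all()

# Retarget the same abstract circuit to two different native gate sets,
# roughly "superconducting-like" and "trapped-ion-like".
for basis in (["rz", "sx", "cx"], ["rx", "ry", "rxx"]):
    compiled = transpile(bell, basis_gates=basis)
    print(basis, dict(compiled.count_ops()))

# Execution is also backend-agnostic at this level; here, a simulator.
counts = AerSimulator().run(bell, shots=1000).result().get_counts()
print(counts)  # expect roughly 50/50 between '00' and '11'
```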

HPCwire: Is it your sense that quantum computing will end up being just one piece of the larger computing puzzle, in the sense that most applications will be parsed into classical portions and quantum portions and run on hybrid systems?

Humble: Yeah, so that’s a big question right now. There’s this hybrid model, right, where you’ve got the classical workload and maybe you’re offloading it, or some of it, to a quantum accelerator type of thing. I really like that. Oak Ridge really likes that idea. But there is an alternative, which is that I use the quantum computer as a standalone device, almost like a special-purpose machine that only solves chemistry problems, or only solves physics problems. It almost becomes a proxy for an experiment itself. That’s a very different model, not one that we use conventional computers for, because they don’t have the same physics as the experiment, right? At the moment, we’re kind of looking at both, you know, trying to figure out the advantages of each approach.
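The hybrid accelerator pattern is easiest to see as a loop: a classical optimizer proposes parameters, a quantum device evaluates a cost, and the two iterate. A minimal sketch follows, with the quantum step replaced by an exact analytic placeholder; a real accelerator would return noisy, sampled estimates from hardware.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the quantum step: estimate <H> for a one-qubit ansatz
# Ry(theta)|0> with H = Z. For this state <Z> = cos(theta), so we compute
# it exactly; on hardware this would be a shot-based estimate.
def quantum_expectation(theta):
    return np.cos(theta[0])

# Classical outer loop: the optimizer drives the "device" until convergence.
result = minimize(quantum_expectation, x0=[0.1], method="COBYLA")
print("optimal theta:", result.x, "min <Z>:", result.fun)  # theta ~ pi, <Z> ~ -1
```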

HPCwire: You’re just talking about taking advantage of the probabilistic nature of quantum computing to more closely mimic the problem? This is the Richard Feynman idea – simulating on a quantum system that essentially acts the same way as the physical world?

Inside ORNL’s Spallation Neutron Source facility (credit: HPCwire)

Humble: That’s it. [For example] we have the Spallation Neutron Source here at Oak Ridge. We would normally synthesize the material, take it up to the SNS, put it in there, they would characterize it, and we get out a neutron spectrum. What if I could just program in the material whose spectrum I wanted to see onto a quantum computer? I would have almost a quantum twin of what the SNS facility does, what it outputs to me. It should be a fairly accurate representation of what I should expect. That’s different at some level than just running it on an HPC system.
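To make the “quantum twin” idea concrete: the material is represented by a model Hamiltonian, and its excitation spectrum is the kind of quantity a facility like the SNS measures. The toy sketch below diagonalizes a small Heisenberg spin chain classically with NumPy; in a real quantum twin, the same model would be programmed onto quantum hardware instead.

```python
import numpy as np

# Toy "quantum twin": a 4-site spin-1/2 Heisenberg chain, the kind of model
# Hamiltonian one might program onto a quantum simulator. Here we diagonalize
# it classically; the excitation gaps are what neutron scattering probes.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(pauli, site, n):
    """Embed a single-site Pauli operator at `site` in an n-spin chain."""
    mats = [pauli if i == site else I for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4
H = sum(op(P, i, n) @ op(P, i + 1, n) for i in range(n - 1) for P in (X, Y, Z))
energies = np.linalg.eigvalsh(H)
# Excitation spectrum: energy gaps above the ground state.
print(np.round(energies - energies[0], 4))
```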

HPCwire: You hear people, like Nvidia, say, “Well, look, you’ll never be able to do matrix multiplies as efficiently as you can on a GPU.” And maybe that’s true, but it might not make any difference if you’re doing a direct simulation that essentially mimics the material.

Humble: We’ve gotten some early examples where we have used either the D-Wave quantum computer or the IBM quantum computer to simulate model materials and actually get out results that we can match to experiment. The thing is, though, those models are so small right now that they’re really not surpassing anything we can do today. But it’s headed in that direction. So that’s really got me excited.

HPCwire: Let’s shift gears slightly and talk about QSC’s goals and near-term deliverables for the three focus areas: materials, algorithms and sensors.

Humble: In the materials area, we want to demonstrate that topological non-abelian anyons are present in the materials that we’re creating. There’s a particular type of measurement there called braiding that no one has demonstrated yet in these materials. So we want to be the first to do that. In the sensing area, we actually want to be able to build sensors, arrays of sensors, that can detect quantum states. This would be precisely the types of sensors that we need to look for the anyons and other types of quantum material characterization. It would also be a handoff point to the high-energy physics community for the dark matter search. So in terms of a five-year goal, it’s really about demonstrating a new capability for quantum sensing, based on these multi-array sensors.

In the computing area, we’ve actually picked a few different scientific domains – materials chemistry, nuclear physics, and high-energy physics – where we are attempting to show that you can use today’s quantum computers to solve some of those scientific problems, not necessarily surpassing the state of the art. That gets into this whole quantum advantage question; demonstrating quantum advantage is a stretch goal for us right now. But we want to show the broad feasibility of using quantum computing – specifically quantum simulation types of calculations, where you’re doing what we were saying earlier, mimicking the quantum system – across all these areas, to validate that this is a good path forward.

HPCwire: Isn’t this similar to what D-Wave does for optimization problems? It’s not a physical representation, exactly, of the system, but it is similar.

D-Wave Advantage System

Humble: It depends on the problem you choose. Optimization is such a big field already, whether you’re thinking about operations logistics or even recommendation systems. When you map those problems onto the quantum platforms, there’s this encoding, a translation step. In that case, it really is a fundamentally different representation of the problem you’re solving, and you’re hoping that the solution ends up mapping back to the problem you originally posed.

In these scientific areas, though, we are actually starting off with quantum mechanical models that translate directly into the physical systems that we’re using. And that’s honestly why we chose them, because we think those are going to have the lowest overhead for implementation, and therefore the best chance of demonstrating some type of advantage.
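The “translation step” Humble mentions for optimization problems typically means rewriting the objective as a QUBO (quadratic unconstrained binary optimization), the native input format of annealers like D-Wave’s. A minimal sketch, using a toy max-cut instance and a brute-force solve standing in for actual quantum hardware:

```python
import itertools
import numpy as np

# Encode max-cut on a triangle graph as a QUBO. The cut value of edge (i, j)
# is x_i + x_j - 2*x_i*x_j for binary x; maximizing the total cut means
# minimizing its negative, which fills the QUBO matrix Q below.
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

Q = np.zeros((n, n))
for i, j in edges:
    Q[i, i] -= 1          # linear terms live on the diagonal (x_i^2 = x_i)
    Q[j, j] -= 1
    Q[i, j] += 2          # quadratic coupling

# Brute force in place of an annealer: evaluate x^T Q x over all bitstrings.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("assignment:", best)  # one of the optimal 2-vs-1 partitions
```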

HPCwire: Even with those problems in which you can directly map them to the system, I’m guessing you’ll still get a distribution of results. You have to run the problem, not once, but many times to find some distribution of answers, and pick from among those. But with a non-abelian based system, the distribution will be more reflective of what’s actually happening in nature as opposed to also reflecting noise in the system? Do I have that right?

Humble: Exactly right. The other unique wrinkle here is that sometimes the solution is actually a distribution. So it may not just be a single value you’re looking for; the output could actually be a probability distribution. Quantum is perfect for solving those types of problems. What ends up happening is you sample from your device to get a representation of it. Then the question becomes: how many samples do you take, and how good was that representation? It all ends up looking a lot like probabilistic computing at some level.
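That sampling question, how many shots before the empirical distribution is a faithful representation, can be illustrated in a few lines. The four-outcome target distribution below is synthetic, standing in for the measurement statistics of a two-qubit device.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "true" distribution over 4 outcomes (as from a 2-qubit state).
true_p = np.array([0.5, 0.25, 0.15, 0.10])

# Watch the empirical estimate converge as the shot count grows.
for shots in (100, 1000, 10000):
    samples = rng.choice(4, size=shots, p=true_p)
    est_p = np.bincount(samples, minlength=4) / shots
    tvd = 0.5 * np.abs(est_p - true_p).sum()  # total variation distance
    print(f"{shots:>6} shots: estimate {np.round(est_p, 3)}, TVD {tvd:.3f}")
```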

HPCwire: You’ve said the center also looks at other technologies like superconducting or trapped-ion systems. What does that entail?

IBM Eagle QPU

Humble: A good example is IBM, one of our partners within the center. We actually work with them on these application areas, trying to map materials problems onto their devices, the IBM devices, and they’re providing us feedback on the best way to do the mapping and mitigate the noise. The output is we get a really nice publication with them demonstrating the feasibility of solving science problems on today’s devices. IBM is invested in that because they want to understand the capabilities of their system. We’re happy with the partnership because they’re giving us access to some of their best systems for this purpose. We’re doing the same thing with a company called ColdQuanta. They’re a small startup that is actually getting a lot bigger. They’re using a different technology based on cold atoms trapped in electromagnetic fields. But it’s the same idea. We’re working with them on how we can provide feedback to make that system better – in this case, actually, for quantum sensing algorithms.

HPCwire: Is QSC’s near-term output primarily peer-reviewed papers that sort of demonstrate what you’ve accomplished so far?

Humble: We’ve had a really good publication record to date; I think we’re up to 111 peer-reviewed publications as of last month. But in addition, we’re focusing a lot on invention disclosures and software copyrights, because we see those as ways to get these ideas out into industry a little bit faster. It’s great to publish papers, but really the role of the centers is to act as engines of innovation within this big QIS ecosystem.

So just publishing papers isn’t good enough, to be honest; we actually have to transition the technology. With Microsoft, we’re actually taking on some of the higher-risk materials. And once they get vetted by us, we’ve got mechanisms in place to hand them off so that Microsoft can look into developing them further. With some of our university partners – Purdue is a big partner here – they’re actually training students on the materials and devices we’re making, and then those students are going off into industry and picking up positions where they can transfer these ideas.

HPCwire: One of the challenges for external observers is figuring out how the various quantum research programs fit together, even just at ORNL. I’m thinking of Rafael Pooser, who for a while was working on a DOE quantum testbed, and Nick Peters, who’s working on quantum networking.

Humble: Within Oak Ridge, we actually have multiple programs, funded by the Department of Energy and elsewhere, to support the development of quantum. QSC is one of those. I think of QSC as the focal point for transitioning basic science into the more applied areas. So whether it’s partnership with industry or workforce development, QSC is our focal point there. But we [ORNL] also have our QIS section. This is the one that Nick Peters leads, focused on computing, sensing, networking, and core capabilities in that area. We also have a quantum materials program funded directly by the Department of Energy. And we have a quantum computing user program that runs out of the OLCF (Oak Ridge Leadership Computing Facility), which provides access to commercial quantum computing systems.

Then we have all the other user facilities, such as the SNS and the CNMS (Center for Nanophase Materials Sciences), and these nanoscale facilities. So internally, it’s almost like a roundtable of the different stakeholders in this field of quantum. The goal, though, is to keep up with DOE’s priorities in this area. Even before the National Quantum Initiative Act, a letter was sent out emphasizing QIS as a priority across the DOE Office of Science. QSC is kind of a focal point for all of this, but by far not the only piece.

HPCwire: How do the groups interact? Is it just ad hoc?

Humble: No, we’re much better coordinated. If you think of it as a Venn diagram, the QIS section and the QSC are actually overlapping each other substantially. But there are parts of the Oak Ridge quantum research portfolio that are not within QSC – networking, for example. QSC doesn’t have any networking activity within its thrust areas. Nick is pursuing quantum networking through a separate DOE program in that area. Now, of course, we’re trying to figure out how they can leverage each other. How can you build up the quantum networks, the quantum facilities, and then the Quantum Science Center? But it’s really much more tightly coordinated, because it’s all the same people, all the same laboratories.

HPCwire: Are you still able to do research?

Humble: Yes. I’ve got a couple of programs, all of them focused on quantum computing. Some are focused on software for quantum computing – building the compilers and languages that can do this hardware-agnostic programming we’re talking about. The other is developing applications in the chemistry space. How to do quantum chemistry using quantum computers is a really interesting problem, because the chemistry community is in some ways perfect to take on quantum computing; they already understand the mathematics, and the problem space is well set up. But getting those problems onto these computers is difficult because of the technology limits, and so we spend a lot of time looking for the best ways to program things.
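A rough sketch of where that chemistry-to-qubits pipeline ends up: after a fermion-to-qubit mapping such as Jordan-Wigner, the molecular Hamiltonian becomes a weighted sum of Pauli strings. The coefficients below are illustrative placeholders rather than any real molecule’s integrals; a two-qubit toy Hamiltonian is diagonalized classically to read off the ground-state energy a quantum algorithm would target.

```python
import numpy as np

# After fermion-to-qubit mapping, a molecular Hamiltonian is a weighted sum
# of Pauli strings. Coefficients here are illustrative placeholders, NOT a
# real molecule's integrals.
paulis = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
          "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.array([[1, 0], [0, -1]])}

hamiltonian = [(-1.05, "II"), (0.39, "ZI"), (0.39, "IZ"),
               (0.01, "ZZ"), (0.18, "XX")]  # placeholder coefficients

# Build the 4x4 matrix and diagonalize; a quantum algorithm (e.g. VQE)
# would instead estimate this ground-state energy on hardware.
H = sum(c * np.kron(paulis[s[0]], paulis[s[1]]) for c, s in hamiltonian)
ground_energy = np.linalg.eigvalsh(H)[0]
print("ground-state energy (toy units):", round(float(ground_energy), 4))
```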

HPCwire: You mentioned a user program separate from QSC. What’s that all about?

Humble: I also manage the Quantum Computing User Program, which is the part of OLCF that provides access to commercial systems. A couple of years ago, the Department of Energy started providing us with funds to buy subscriptions to different commercial vendors, [including] IBM, Rigetti and Quantinuum. Those are the three we have in the program right now. We use the OLCF proposal review system to basically recruit user projects onto these devices. The big requirement there is you have to publish your results. We’ve got probably close to 40 publications this year alone, more than 70 for the life of the program. What we’re really doing, though, is monitoring the progress that we’re making on these systems. Are the problems they’re solving getting bigger? Are they getting better results? These types of things. Users normally get six months of access, and a chance to renew for six more. At the moment, we’ve got over 200 users in the program, mostly from the DOE labs, but also from universities and a few from industry.

HPCwire: The systems you mentioned are either superconducting or trapped ion. Have you thought about offering photonics or other types of systems?

Humble: We had Xanadu (photonics) for a while but we ended up not renewing that contract. Of course, we’re keeping our eyes open for all the technologies that are out there. I personally don’t have a strong opinion about which technologies are in front; I think they’re all still being evaluated.

HPCwire: Thanks very much for your time.


Travis Humble Bio
Travis Humble is director of the Quantum Science Center, a Distinguished Scientist at Oak Ridge National Laboratory, and director of the lab’s Quantum Computing Institute. Travis is leading the development of new quantum technologies and infrastructure to impact the DOE mission of scientific discovery through quantum computing. As director of the QSC, Travis leads the innovation of scalable, resilient quantum information technologies through new materials, devices, and algorithms and facilitates the transfer of quantum technologies to the broadest audience.

In addition, Travis serves as director of the OLCF Quantum Computing User Program by leading the management and operation of quantum computing technologies for a broad base of users. These revolutionary new approaches to familiar computational problems help reduce algorithmic complexity, reduce computational resource requirements like power and communication, and increase the scale at which state-of-the-art scientific applications perform. In this role, Travis leads the design, development, and benchmarking of quantum computing platforms.

Travis is editor-in-chief for ACM Transactions on Quantum Computing, Associate Editor for Quantum Information Processing, and co-chair of the IEEE Quantum Initiative. Travis also holds a joint faculty appointment with the University of Tennessee Bredesen Center for Interdisciplinary Research and Graduate Education working with students on energy-efficient computing solutions. Travis received a doctorate in theoretical chemistry from the University of Oregon before joining ORNL in 2005.
