Glimpse into ORNL Quantum Science Center Efforts to Find the Elusive Majorana and Much More

By John Russell

August 16, 2022

The Quantum Science Center (QSC), headquartered at Oak Ridge National Laboratory, is one of five such centers created by the National Quantum Initiative Act of 2018 and run by the Department of Energy. They all have distinct but overlapping goals. That’s sort of the point: to bring focus, cooperation, and a heavy dose of industry participation to advancing quantum information science broadly and quantum computing directly in the U.S.

All of the centers have ambitious goals, but perhaps none is more ambitious than QSC’s – to help deliver topological quantum computing. This approach depends on an as-yet unobserved particle, the Majorana, one of a class of mysterious quasiparticles that obey non-abelian statistics. We won’t dig deeply into that beyond saying such a quantum computer should be extremely resistant to error – a very good thing – and may also be able to map physical problems directly onto quantum computers. The answers would have quantum laws baked in and be extremely accurate. Remember digital twins? Think quantum twins instead, with formidable predictive power.

Travis Humble, ORNL QSC. Credit: Carlos Jones, ORNL

The race for topological quantum computing is a bit of a gamble. There are skeptics. Microsoft has been the biggest champion of the topological approach and is a close QSC collaborator. Interestingly, in its effort to flesh out topological quantum computing, QSC is leveraging existing NISQ systems. Newly named director of QSC Travis Humble calls this, “using today’s quantum computers to try and build tomorrow’s quantum computers.”

Don’t be misled. There is a great deal more than chasing non-abelian particles going on at QSC, which is also digging into materials science, algorithm development, and sensors, although much of the work in these areas is intended to support development of topological computers (see QSC’s science thrusts below, excerpted from the QSC website).

Thrust 1: Quantum Materials Discovery and Development

Thrust 1 demonstrates and controls non-abelian anyon states relevant to Quantum Information Science (QIS) in real materials. These states are expected to exist in electronic materials with nontrivial topologies and magnetic systems with entangled quantum spins, and the topological protection and delocalization of the states that make them attractive for QIS applications can also make them difficult to probe and to understand. Thus, research in this thrust is focused on understanding and developing topological electronic materials, quantum spin systems, and quantum probes. Led by ORNL’s Michael McGuire.

Thrust 2: Quantum Algorithms and Simulation

Thrust 2 achieves predictive capabilities for the study of strongly coupled quantum systems, including topological systems and quantum field theories, and develops and tests quantum algorithms for quantum-limited sensors. QSC researchers are developing efficient, scalable, and robust quantum simulation and metrology algorithms, testing these algorithms in predictive dynamical quantum simulation and quantum sensing applications, and developing software tools to support algorithm analysis, optimization, and implementation. Led by LANL’s Andrew Sornborger.

Thrust 3: Quantum Devices and Sensors for Discovery Science

Thrust 3 develops an understanding of fundamental sensing mechanisms in high-performance quantum devices and sensors. This understanding allows QSC researchers, working across the Center, to co-design new quantum devices and sensors with improved energy resolution, lower energy detection thresholds, better spatial and temporal resolution, lower noise, and lower error rates. Going beyond proof-of-principle demonstrations, the focus is on implementation of this hardware in specific, real-world applications. Led by Fermilab’s Aaron Chou.

Humble emphasizes that QSC is off to a fast start and, like all of DOE’s QIS centers, is focused on concrete deliverables. “We’ve (QSC) had a really good publication record. I think we’re up to like 111 peer-reviewed publications as of last month. But in addition, we’re focusing a lot on invention disclosures and software copyrights because we see those as ways to get these ideas out in industry a little bit faster. It’s great to publish papers, but really the role of the centers is to act as engines of innovation within the big QIS ecosystem. So just publishing papers isn’t good enough, to be honest; we actually have to transition the technology.”

It is perhaps noteworthy that the QIS centers seem to be trying to carve out identities beyond the labs at which they are headquartered. Humble said, “You’re exactly right. There’s so much interest in this topic at the moment that any one institution is ill-prepared to take it all on. So for example, at Oak Ridge, we’re the lead for the Quantum Science Center, but there are 17 partners overall contributing to it, and honestly, if we took any one of them away, we’d end up with a gap in our capabilities.”

HPCwire recently talked with Humble about QSC’s expansive plans. The center has roughly 250 people – “Really I should call them members of the center. That includes everyone: our advisory boards, our students, our staff, our postdocs. I think the 250 number is probably stable right now.” – and its budget is fixed at about $25 million per year, “so we’re not really growing the research portfolio,” said Humble.

Presented here is a portion of that wide-ranging conversation.

HPCwire: Maybe you could start by giving an overview. For example, the focus on topological quantum computing seems like a more distant goal compared to other centers. Also, maybe you could talk a little bit about key near-term deliverables.

Humble: As you mentioned, the National Quantum Initiative Act of 2018 directed the Department of Energy to establish these QIS research centers. It ended up selecting five; the Quantum Science Center was one of them, headquartered at Oak Ridge. The other four were also headquartered at national laboratories, which is maybe not surprising in hindsight. I would say what makes the QSC distinct is that we are focused on this question of how we can leverage materials science to build better quantum computers. The non-abelian anyons are a fundamental type of particle, and we are trying to be the first to demonstrate not only that we can create those types of particles but that we can then use them for quantum computation. That’s a long-term goal.

I think for the first five years, we are probably going to get to the point of discovering the material that can host these particles; we’ve already got some really good candidates out there. It will take more time, though, to transform that into an actual quantum computer.

Overall, we have three topical research focuses, what we call thrust areas. These include materials science; computational science – really, quantum computing algorithms and applications – and sensors. The efforts are all interrelated. Think about it this way. There’s this idea that I’ve first got to create these new types of quantum materials. But in order to confirm those quantum materials, I’m going to need new types of quantum sensors. And in order to create those quantum sensors, I’m going to need these new types of quantum algorithms. So in the end, it’s a big cycle of productivity. That’s the crux of the center, to keep that cycle of interaction going.

HPCwire: The focus on non-abelian anyon systems seems a more long-term goal. Other centers are working on better-known and better-understood approaches such as trapped ions, superconducting qubits, and cold atoms. Does that mean that the QSC is maybe the most future-looking of the centers? It doesn’t seem like there’s a near-term payoff here.

Humble: We also are looking at superconducting electronics, trapped ions, photonics – those are actually part of our thrust area on quantum computing, where we’re trying to use today’s quantum computers to gain insights into the materials and properties that we’re going to need to scale these things up. As you said, today’s technologies – superconducting, trapped ions, etc. – are very good for proof-of-principle demonstrations. But I think everyone’s a little worried about how you could build a full-scale system. It gets very complex to manage all the resources that would be required to make that basically production level. That’s where the new types of materials come in; they could actually reduce the complexity requirements of building these future systems. But it hinges on finding those non-abelian anyons.

So in this sense, I do agree that we are using today’s quantum computers to try and build tomorrow’s quantum computers. But the output along the way is that we are writing programs for superconducting devices and trapped-ion devices, and we’re getting good results from that. And we’re building these sensors, which can actually be used today for detecting new types of materials – even dark matter. We have a partner at Fermilab that is actually focused on looking for candidates for the type of dark matter that may be out there. All of that ends up being some of the output that we’re generating in the near term.

The idea of partnering with Microsoft as part of our center to build that future quantum computer, that’s probably on the 10-year timescale. I don’t think that we’re going to build a topological quantum computer in the next three years. Of course not. But seven, eight years from now, we may actually have some working prototypes. So it’s future oriented, yes, but maybe kind of intermediate-future.

HPCwire: What fundamental advantage does a topological quantum computer based on these non-abelian particles have over the other technologies?

Humble: Probably the leading source of error at the moment in the existing technologies is fluctuations in the control signals that are actually being used for these operations. And it turns out that’s where the topological model is more resilient: it is less sensitive to those local control fluctuations. So I do think, to first order, it would help get around that engineering challenge. Of course, you have to worry about what’s next, right? So once you tamp that down, are you going to pick up another problem that’s even harder? That we don’t know yet. The theory says, yes, you’ll be able to build these types of quantum computers at larger sizes and operate them more efficiently if you have this type of material. But there’s a big question, you know, which is how that really turns out in practice. I think we won’t understand that until we build some of these prototypes and start getting feedback.

HPCwire: Will such a system be able to use the quantum ecosystem (middleware, programming tools, etc.) that’s rapidly growing now?

Humble: I actually think that’s essential. My personal opinion is there will not be one technology that we build quantum computers from, in the same way that conventional computing has [been] functionalized into memory, and compute, and bandwidth, and all these things. We’re going to need the same thing on the quantum side. At the moment, we’re all focused on that one technology piece (compute) because we have to develop it. But once you get to a full system that’s really productive, it’s probably going to be a mixture of technologies, which means that your higher-level control systems and architecture have to be agnostic to the individual hardware.

I think of it in terms of Oak Ridge and HPC. You know, we invest so much in building up the code base and the tools for all this – if we had to rewrite that every time we change the architecture, that would be a failing proposition. I think the same is going to be true for quantum. Even if we can create a new type of quantum computer, it’s going to have to be backwards compatible with some of the [tools and systems].

HPCwire: Is it your sense that quantum computing will end up being just a piece of the computing writ large puzzle in the sense that most applications will be parsed into classical portions and quantum portions and be run on hybrid systems?

Humble: Yeah, so that’s a big question right now. There’s this hybrid model, right, where you’ve got the classical workload, and maybe you’re offloading it or some of it to a quantum accelerator type of thing. I really like that. Oak Ridge really likes that idea. But there is an alternative, which is that I use the quantum computer as a standalone device, almost like a special purpose machine that only solves chemistry problems, or only solves physics problems. It almost becomes a proxy for an experiment itself. That’s a very different model, not one that we use conventional computers for because they don’t have the same physics as the experiment, right? At the moment, we’re kind of looking at both, you know, trying to figure out the advantages for each approach.
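Humble’s hybrid picture – a classical workload offloading a kernel to a quantum accelerator – can be sketched as a simple loop. The sketch below is purely illustrative: `quantum_expectation` is a hypothetical stand-in for a call to real hardware (here just a classical objective plus simulated shot noise), and all the names are this article’s, not QSC’s.

```python
import random

random.seed(0)

def quantum_expectation(theta):
    """Hypothetical stand-in for a quantum accelerator call: on real hardware
    this would be an expectation value estimated from many measurement shots.
    Mocked here as a classical objective plus Gaussian 'shot noise'."""
    return (theta - 1.5) ** 2 + random.gauss(0, 0.01)

def hybrid_minimize(theta=0.0, lr=0.1, steps=200, eps=0.5):
    """Classical outer loop; each iteration offloads two evaluations
    to the (mocked) quantum device to estimate a gradient."""
    for _ in range(steps):
        grad = (quantum_expectation(theta + eps)
                - quantum_expectation(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

print(hybrid_minimize())  # lands close to the true minimum at 1.5
```

The point is the division of labor: the classical side decides where to evaluate next, and the quantum side answers a question the classical side finds expensive.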

HPCwire: You’re just talking about taking advantage of the probabilistic nature of quantum computing to more closely mimic the problem? This is the Richard Feynman idea – simulating on a quantum system that essentially acts the same way as the physical world?

Inside ORNL’s Spallation Neutron Source facility (credit: HPCwire)

Humble: That’s it. [For example] we have the Spallation Neutron Source here at Oak Ridge. We would normally synthesize the material, take it up to the SNS, put it in there, they would characterize it, and we get out a neutron spectrum. What if I could just program in the material whose spectrum I wanted to see onto a quantum computer? I would have almost a quantum twin of what the SNS facility does, what it outputs to me. It should be a fairly accurate representation of what I should expect. That’s different at some level than just running it on an HPC system.

HPCwire: You hear people, like Nvidia, say, “Well, look, you’ll never be able to do matrix multiplies as efficiently as you can on a GPU.” And maybe that’s true, but it might not make any difference if you’re just doing the direct simulation that essentially mimics the material.

Humble: We’ve gotten some early examples where we have used either the D-Wave quantum computer or the IBM quantum computer to simulate model materials and actually get out results that we can match to experiment. The thing is, though, those models are so small right now that they don’t really surpass where we are today. But it’s headed in that direction. So that’s really got me excited.

HPCwire: Let’s shift gears slightly and talk about QSC’s goals and near-term deliverables for the three focus areas: materials, algorithms, and sensors.

Humble: In the materials area, we want to demonstrate that topological non-abelian anyons are present in the materials that we’re creating. There’s a particular type of measurement there called braiding that no one has demonstrated yet in these materials. So we want to be the first to do that. In the sensing area, we actually want to be able to build sensors, arrays of sensors, that can detect quantum states. This would be precisely the types of sensors that we need to look for the anyons and other types of quantum material characterization. It would also be a handoff point to the high-energy physics community for the dark matter search. So in terms of a five-year goal, it’s really about demonstrating a new capability for quantum sensing, based on these multi-array sensors.

In the computing area, we’ve actually picked a couple of different scientific domains – materials chemistry, nuclear physics, and high energy physics – where we are attempting to show that you can use today’s quantum computers to solve some of those scientific problems, not necessarily surpassing state of the art. That gets into this whole quantum advantage question, that can be a stretch goal for us right now to demonstrate quantum advantage. But just to show the broad feasibility of using quantum computing, specifically quantum simulation types of calculation, where you’re doing what we were saying earlier, mimicking the quantum system, doing that across all these areas, to validate that this is a good path forward.

HPCwire: Isn’t this similar to what D-Wave does for optimization problems? It’s not a physical representation, exactly, of the system, but it is similar.

D-Wave Advantage System

Humble: It depends on the problem you choose. Optimization is such a big field already, whether you’re thinking about operations logistics or even recommendation systems. When you map those problems onto the quantum platforms, there’s this encoding, a translation step. In that case, it really is a fundamentally different representation of the problem that you’re solving, but you’re hoping that the solution ends up mapping back to the problem you originally posed.
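That encoding step can be made concrete with a toy example. The sketch below illustrates the general idea (it is not QSC or D-Wave code): a three-node max-cut problem is translated into the binary-variable, QUBO-style form that annealers accept, and, because the instance is tiny, it is solved by brute force where a quantum annealer would instead sample solutions.

```python
from itertools import product

# Toy instance: maximize the cut of a triangle graph.
edges = [(0, 1), (1, 2), (0, 2)]

def cut_value(x):
    # Encoding: for edge (i, j), x_i + x_j - 2*x_i*x_j equals 1 exactly
    # when the endpoints are assigned to opposite sides of the cut.
    return sum(x[i] + x[j] - 2 * x[i] * x[j] for i, j in edges)

# Brute-force stand-in for the hardware sampler at this size.
best = max(product([0, 1], repeat=3), key=cut_value)
print(best, cut_value(best))  # any best cut of a triangle has value 2
```

The answer comes back as a bit string in the encoded variables; mapping it back to the original problem means reading those bits as a node partition.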

In these scientific areas, though, we are actually starting off with quantum mechanical models that translate directly into the physical systems that we’re using. And that’s honestly why we chose them, because we think those are going to have the lowest overhead for implementation, and therefore the best chance of demonstrating some type of advantage.

HPCwire: Even with those problems in which you can directly map them to the system, I’m guessing you’ll still get a distribution of results. You have to run the problem, not once, but many times to find some distribution of answers, and pick from among those. But with a non-abelian based system, the distribution will be more reflective of what’s actually happening in nature as opposed to also reflecting noise in the system? Do I have that right?

Humble: Exactly right. The other unique wrinkle here is that sometimes the solution is actually a distribution. So it may not just be a single value you’re looking for, it could actually have produced a distributed probability. Quantum is perfect for solving those types of problems. What ends up happening is you sample from your device to get a representation of it. Then the question becomes, how many samples do you take and how good was that representation? It all ends up looking a lot like probabilistic computing at some level.
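Classically, the sample-and-estimate picture Humble describes is just repeated draws from an unknown distribution. In the sketch below the device is mocked by a fixed two-outcome distribution (think of an idealized Bell-state readout – this article’s example, not his); his question of “how many samples” shows up as the estimate tightening with shot count.

```python
import random
from collections import Counter

random.seed(7)

# Mocked 'device' output distribution over measurement outcomes.
true_p = {"00": 0.5, "11": 0.5}

def estimate(shots):
    """Draw `shots` samples and return the empirical distribution."""
    draws = random.choices(list(true_p), weights=list(true_p.values()), k=shots)
    counts = Counter(draws)
    return {k: counts[k] / shots for k in true_p}

# More shots -> a tighter estimate of the underlying distribution.
for shots in (100, 10_000):
    print(shots, estimate(shots))
```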

HPCwire: You’ve said the center also looks at other technologies like superconducting or ion trap or these things. What does that entail?

IBM Eagle Quantum QPU

Humble: A good example is IBM, one of our partners within the center. We actually work with them on these application areas, trying to map materials problems onto their devices, the IBM devices, and they’re providing us feedback on what is the best way to do the mapping and mitigate against the noise. The output is we get a really nice publication with them demonstrating the feasibility of solving science problems on today’s devices. IBM is invested in that because they want to understand the capabilities of their system. We’re happy with the partnership because they’re giving us access to some of their best systems for this purpose. We’re doing the same thing with a company called ColdQuanta. They’re a small startup that actually is getting a lot bigger. They’re using a different technology based on cold atoms that get trapped in electromagnetic fields. But it’s the same idea. We’re working with them on how we can provide feedback to make that system better. In this case, actually, for quantum sensing algorithms.

HPCwire: Is QSC’s near-term output primarily peer-reviewed papers that sort of demonstrate what you’ve accomplished so far?

Humble: We’ve had a really good publication record to date. I think we’re up to like 111 peer-reviewed publications as of last month. But in addition, we’re focusing a lot on invention disclosures and software copyrights because we see those as ways to get these ideas out in industry a little bit faster. It’s great to publish papers, but really the role of the centers is to act as engines of innovation within this big QIS ecosystem.

So just publishing papers isn’t good enough, to be honest; we actually have to transition the technology. With Microsoft, we’re actually taking on some of the higher-risk materials. And once they get vetted by us, we’ve got mechanisms in place to hand them off so that Microsoft can look into developing them further. With some of our university partners – Purdue is a big partner here – they’re actually training students on the materials and devices we’re making, and those students are going off into industry and picking up positions where they can now transfer these ideas.

HPCwire: One of the challenges for external observers is figuring out how the various quantum research programs fit together. Even just at ORNL. I’m thinking of Rafael Pooser who for a while was working on a DOE quantum testbed and Nick Peters, who’s working on quantum networking.

Humble: Within Oak Ridge, we actually have multiple programs funded by Department of Energy and elsewhere to support development of quantum. QSC is one of those. I think of QSC as the focal point for transitioning basic science into the more applied areas. So whether it’s partnership with industry or workforce development, QSC is our focal point there. But we [ORNL] also have our QIS section. This is the one that Nick Peters leads focused on computing, sensing, networking, and core capabilities in that area. We also have a quantum materials program funded directly from Department of Energy. And we have a quantum computing user program that runs out of the OLCF (Oak Ridge Leadership Computing Facility) which is providing access to commercial quantum computing systems.

Then we have all the other user facilities such as the SNS and CNMS (Center for Nanophase Material Sciences) and these nanoscale facilities. So internally, it’s almost like a roundtable of these different stakeholders in this field of quantum. The goal, though, is to keep up with DOE’s priorities in this area. Even before the National Quantum Initiative Act, a letter was sent out emphasizing QIS as a priority across the DOE Office of Science. QSC is kind of a focal point for all of this, but by far, not the only piece.

HPCwire: How do the groups interact? Is it just ad hoc?

Humble: No, we’re much better coordinated. If you think of it as like a Venn diagram, the QIS section and the QSC are actually overlapping each other substantially. But there are parts of the Oak Ridge quantum research portfolio that are not within QSC – networking, for example. QSC doesn’t have any networking activity within its thrust areas. Nick is pursuing quantum networking through a separate DOE program in that area. Now, of course, we’re trying to figure out how they leverage each other. How can you build up the quantum networks, the quantum facilities, and then the Quantum Science Center? But it’s really much more tightly coordinated, because it’s all the same people, all the same laboratories.

HPCwire: Are you still able to do research?

Humble: Yes. I’ve got a couple of programs, all of them focused on quantum computing. Some are focused on software for quantum computing – building the compilers and languages that can do this hardware-agnostic programming we’re talking about. The other is on developing applications in the chemistry space. How to do quantum chemistry using quantum computers is a really interesting problem, because the chemistry community is in some ways perfect to take on quantum computing; they already understand the mathematics, and the problem space is well set up. But getting those problems onto these computers is difficult because of the technology limits, and so we spend a lot of time looking for the best ways to program things.

HPCwire: You mentioned a user program separate from QSC. What’s that all about?

Humble: I also manage the Quantum Computing User Program, which is the part of OLCF that’s providing access to commercial systems. A couple of years ago, the Department of Energy started providing us with funds to buy subscriptions to different commercial vendors, [including] IBM, Rigetti and Quantinuum. Those are the three we have in the program right now. We use the OLCF proposal review system to basically recruit users’ projects onto these devices. The big requirement there is you have to publish your results. We’ve got probably close to 40 publications this year alone, more than 70 for the life of the program. What we’re really doing, though, is monitoring the progress that we’re making on these systems. Are the problems they’re solving getting bigger? Are they getting better results? These types of things. They normally get six months of access, and a chance to renew it for six more. At the moment, we’ve got over 200 users in the program, mostly from the DOE labs, but also from universities and a few from industry.

HPCwire: The systems you mentioned are either superconducting or trapped ion. Have you thought about offering photonics or other types of systems?

Humble: We had Xanadu (photonics) for a while but we ended up not renewing that contract. Of course, we’re keeping our eyes open for all the technologies that are out there. I personally don’t have a strong opinion about which technologies are in front; I think they’re all still being evaluated.

HPCwire: Thanks very much for your time.


Travis Humble Bio
Travis Humble is director of the Quantum Science Center, a Distinguished Scientist at Oak Ridge National Laboratory, and director of the lab’s Quantum Computing Institute. Travis is leading the development of new quantum technologies and infrastructure to impact the DOE mission of scientific discovery through quantum computing. As director of the QSC, Travis leads the innovation of scalable, resilient quantum information technologies through new materials, devices, and algorithms and facilitates the transfer of quantum technologies to the broadest audience.

In addition, Travis serves as director of the OLCF Quantum Computing User Program by leading the management and operation of quantum computing technologies for a broad base of users. These revolutionary new approaches to familiar computational problems help reduce algorithmic complexity, reduce computational resource requirements like power and communication, and increase the scale at which state-of-the-art scientific applications perform. In this role, Travis leads the design, development, and benchmarking of quantum computing platforms.

Travis is editor-in-chief for ACM Transactions on Quantum Computing, Associate Editor for Quantum Information Processing, and co-chair of the IEEE Quantum Initiative. Travis also holds a joint faculty appointment with the University of Tennessee Bredesen Center for Interdisciplinary Research and Graduate Education working with students on energy-efficient computing solutions. Travis received a doctorate in theoretical chemistry from the University of Oregon before joining ORNL in 2005.
