Glimpse into ORNL Quantum Science Center Efforts to Find the Elusive Majorana and Much More

By John Russell

August 16, 2022

The Quantum Science Center (QSC), headquartered at Oak Ridge National Laboratory, is one of five such centers created by the National Quantum Initiative Act in 2018 and run by the Department of Energy. They all have distinct and overlapping goals. That’s sort of the point: to bring focus, cooperation, and a heavy dose of industry participation to advancing quantum information science broadly, and quantum computing directly, in the U.S.

All of the centers have ambitious goals, but perhaps none is more ambitious than QSC’s – to help deliver topological quantum computing. This approach depends on an as-yet-unproven particle, the Majorana, one of a class of mysterious particles called non-abelian anyons because they obey non-abelian statistics. We won’t dig deeply into that beyond saying such a quantum computer should be extremely resistant to error – a very good thing – and may also be able to directly map physical problems onto quantum computers. The answers would have quantum laws baked in and be extremely accurate. Remember digital twins? Think quantum twins instead, with formidable predictive power.

Travis Humble, ORNL QSC. Credit: Carlos Jones, ORNL

The race for topological quantum computing is a bit of a gamble. There are skeptics. Microsoft has been the biggest champion of the topological approach and is a close QSC collaborator. Interestingly, in its effort to flesh out topological quantum computing, QSC is leveraging existing NISQ systems. Newly named QSC director Travis Humble calls this “using today’s quantum computers to try and build tomorrow’s quantum computers.”

Don’t be misled. There is a great deal more than chasing non-abelian particles going on at QSC, which is digging into materials science, algorithm development, and sensors, although much of the work in these areas is intended to support development of topological computers (see QSC’s science thrusts below, excerpted from the QSC website).

Thrust 1: Quantum Materials Discovery and Development

Thrust 1 demonstrates and controls non-abelian anyon states relevant to Quantum Information Science (QIS) in real materials. These states are expected to exist in electronic materials with nontrivial topologies and magnetic systems with entangled quantum spins, and the topological protection and delocalization of the states that make them attractive for QIS applications can also make them difficult to probe and to understand. Thus, research in this thrust is focused on understanding and developing topological electronic materials, quantum spin systems, and quantum probes. Led by ORNL’s Michael McGuire.

Thrust 2: Quantum Algorithms and Simulation

Thrust 2 achieves predictive capabilities for the study of strongly coupled quantum systems, including topological systems and quantum field theories, and develops and tests quantum algorithms for quantum-limited sensors. QSC researchers are developing efficient, scalable, and robust quantum simulation and metrology algorithms, testing these algorithms in predictive dynamical quantum simulation and quantum sensing applications, and developing software tools to support algorithm analysis, optimization, and implementation. Led by LANL’s Andrew Sornborger.

Thrust 3: Quantum Devices and Sensors for Discovery Science

Thrust 3 develops an understanding of fundamental sensing mechanisms in high-performance quantum devices and sensors. This understanding allows QSC researchers, working across the Center, to co-design new quantum devices and sensors with improved energy resolution, lower energy detection thresholds, better spatial and temporal resolution, lower noise, and lower error rates. Going beyond proof-of-principle demonstrations, the focus is on implementation of this hardware in specific, real-world applications. Led by Fermilab’s Aaron Chou.

Humble emphasizes that QSC is off to a fast start, and like all of DOE’s QIS centers, focused on concrete deliverables. “We’ve (QSC) had a really good publication record. I think we’re up to like 111 peer reviewed publications as of last month. But in addition, we’re focusing a lot on invention disclosures and software copyrights because we see those as ways to get these ideas out in industry a little bit faster. It’s great to publish papers, but really the role of the centers is to act as engines of innovation within the big QIS ecosystem. So just publishing papers isn’t good enough, to be honest, we actually have to transition the technology.”

It is perhaps noteworthy that the QIS centers seem to be trying to carve out identities beyond the labs at which they are headquartered. Humble said, “You’re exactly right. There’s so much interest in this topic at the moment that any one institution is ill-prepared to take it all on. So for example, at Oak Ridge, we’re the lead for the Quantum Science Center, but there are 17 partners overall that are contributing to it, and honestly, if we took any one of them away, we’d end up with a gap in our capabilities.”

HPCwire recently talked with Humble about QSC’s expansive plans. The center has roughly 258 users – “Really I should call them members of the center. That includes everyone: our advisory boards, our students, our staff, our postdocs. I think the 250 number is probably stable right now.” – and its budget is fixed at about $25 million per year, “so we’re not really growing the research portfolio,” said Humble.

Presented here is a portion of that wide-ranging conversation.

HPCwire: Maybe you could start by giving an overview. For example, the focus on topological quantum computing seems like a more distant goal compared to other centers. Also, maybe you could talk a little bit about key near-term deliverables.

Humble: As you mentioned, the National Quantum Initiative Act of 2018 directed the Department of Energy to establish these national QIS research centers. They ended up selecting five; the Quantum Science Center was one of them, headquartered at Oak Ridge. The other four were also headquartered at national laboratories, which is maybe not surprising in hindsight. I would say what makes the QSC distinct is that we are focused on this question of how we can leverage materials science to build better quantum computers. The non-abelian anyons are a fundamental type of particle, and we are trying to be the first to demonstrate that not only can we create those types of particles, but also use them for quantum computation. That’s a long-term goal.

I think for the first five years, we are probably going to get to the point of discovering the materials that can host these particles; we’ve already got some really good candidates out there. It will take more time though to transform that into an actual quantum computer.

Overall, we have three topical research focuses, what we call thrust areas. These include materials science; computational science – and really this is quantum computing algorithms and applications – and sensors. The efforts are all interrelated. Think about it this way. There’s this idea that I’ve got to first create these new types of quantum materials. But in order to confirm those quantum materials, I’m going to need new types of quantum sensors. And in order to create those quantum sensors, I’m going to need these new types of quantum algorithms. So in the end, it’s a big cycle of productivity. That’s the crux of the center, to keep that cycle of interaction going.

HPCwire: The focus on non-abelian anyon systems seems a more long-term goal. Other centers are working on better known and understood approaches such as trapped ions, superconducting, cold atoms. Does that mean that the QSC is maybe the most future-looking of the centers? It doesn’t seem like there’s a near-term payoff here.

Humble: We are also looking at superconducting electronics, trapped ions, photonics; those are actually part of our thrust area on quantum computing, where we’re trying to use today’s quantum computers to gain insights into the materials and properties that we’re going to need to scale these things up. As you said, today’s technologies – superconducting, trapped ions, etc. – are very good for proof-of-principle demonstrations. But I think everyone’s a little worried about how you could build a full-scale system. It gets very complex to manage all the resources that would be required to make that, basically, production level. That’s where the new types of materials come in; they could actually reduce the complexity requirements of building these future systems. But it hinges on finding those non-abelian anyons.

So in this sense, I do agree that we are using today’s quantum computers to try and build tomorrow’s quantum computers. But the output along the way is that we are writing programs for superconducting devices, trapped ion devices, and we’re getting good results from that. And we’re building these sensors, which can actually be used today for detecting new types of materials. Even dark matter, we have a partner at Fermilab that is actually focused on looking for candidates for this type of dark matter that may be out there. All of that ends up being some of the output that we’re generating in the near term.

The idea of partnering with Microsoft as part of our center to build that future quantum computer, that’s probably on the 10-year timescale. I don’t think that we’re going to build a topological quantum computer in the next three years. Of course not. But seven, eight years from now, we may actually have some working prototypes. So it’s future oriented, yes, but maybe kind of intermediate-future.

HPCwire: What fundamental advantage does a topological quantum computer based on these non-abelian particles have over the other technologies?

Humble: Probably the leading source of error at the moment in the existing technologies is fluctuations in the control signals that are actually being used for these operations. And it turns out, that’s where the topological model is more resilient. It is less sensitive to those local control fluctuations. So I do think, to first order, it would help get around that engineering challenge. Of course, you have to worry about what’s next. Right? So once you tamp that down, are you going to pick up another problem that’s even harder – that we don’t know yet. The theory says, yes, you’ll be able to build these types of quantum computers at larger sizes and operate them more efficiently if you have this type of material. But there’s a big question, you know, which is how that really turns out in practice. I think we won’t understand that until we build some of these prototypes and start getting feedback.
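To make the control-fluctuation point concrete, the toy model below treats each gate as a single-qubit rotation whose angle jitters around its target and tracks how fidelity decays over a long gate sequence. It is a minimal sketch with an assumed Gaussian noise model, not QSC or Microsoft code.

```python
import numpy as np

rng = np.random.default_rng(0)

def rx(theta):
    """Single-qubit rotation about X by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def noisy_sequence_fidelity(n_gates, sigma, n_trials=200):
    """Average fidelity of n_gates noisy pi/2 rotations vs. the ideal sequence.

    Each applied angle is pi/2 plus Gaussian jitter of std. dev. sigma,
    mimicking fluctuations in the control signal.
    """
    ideal = np.linalg.matrix_power(rx(np.pi / 2), n_gates) @ np.array([1, 0], dtype=complex)
    fids = []
    for _ in range(n_trials):
        state = np.array([1, 0], dtype=complex)
        for _ in range(n_gates):
            state = rx(np.pi / 2 + rng.normal(0, sigma)) @ state
        fids.append(abs(np.vdot(ideal, state)) ** 2)
    return np.mean(fids)

for sigma in (0.0, 0.01, 0.05):
    print(f"sigma={sigma:.2f}  fidelity after 100 gates: "
          f"{noisy_sequence_fidelity(100, sigma):.4f}")
```

With sigma = 0 the fidelity stays at 1; even a few percent of angle jitter compounds over 100 gates, which is the kind of accumulating control error a topological encoding is claimed to suppress at the hardware level.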

HPCwire: Will such a system be able to use the quantum ecosystem (middleware, programming tools, etc.) that’s rapidly growing now?

Humble: I actually think that’s essential. My personal opinion is there will not be one technology that we build quantum computers from, in the same way that conventional computing has [been] functionalized into memory, and compute, and bandwidth, and all these things. We’re going to need the same thing on the quantum side. At the moment, we’re all focused on that one technology piece (compute) because we have to develop it. But once you get to a full system that’s really productive, it’s probably going to be a mixture of technologies, which means that your higher-level control systems and architecture have to be agnostic to the individual hardware.

I think of it in terms of Oak Ridge and HPC. You know, we invest so much in building up the code base and the tools for all this – if we had to rewrite that every time we changed the architecture, that would be a failing proposition. I think the same is going to be true for quantum. Even if we can create a new type of quantum computer, it’s going to have to be backwards compatible with some of the [tools and systems].
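A minimal sketch of what “hardware-agnostic” could mean in practice: describe the circuit once as an abstract gate list, then retarget it to backends with different native gates. The backend names and rewrite rules below are hypothetical placeholders, not any specific vendor’s toolchain.

```python
from dataclasses import dataclass
from typing import List, Tuple

# A hardware-agnostic intermediate representation: named gates acting on qubit indices.
Gate = Tuple[str, Tuple[int, ...]]

@dataclass
class Circuit:
    n_qubits: int
    gates: List[Gate]

def to_backend(circuit: Circuit, native_two_qubit: str) -> List[str]:
    """Retarget the abstract circuit to a backend with a different native two-qubit gate.

    native_two_qubit might be "CZ" (typical of superconducting chips) or "MS"
    (a Molmer-Sorensen-style gate on trapped ions). The rewrite is schematic,
    not an exact decomposition.
    """
    out = []
    for name, qubits in circuit.gates:
        if name == "CNOT" and native_two_qubit != "CNOT":
            out.append(f"# express CNOT{qubits} via native {native_two_qubit} plus local rotations")
            out.append(f"{native_two_qubit}{qubits}")
        else:
            out.append(f"{name}{qubits}")
    return out

bell = Circuit(2, [("H", (0,)), ("CNOT", (0, 1))])
print("\n".join(to_backend(bell, "CZ")))
print("\n".join(to_backend(bell, "MS")))
```

The same circuit description drives both outputs; only the final translation step changes, which is the property Humble argues higher-level tools will need as hardware mixes.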

HPCwire: Is it your sense that quantum computing will end up being just a piece of the computing writ large puzzle in the sense that most applications will be parsed into classical portions and quantum portions and be run on hybrid systems?

Humble: Yeah, so that’s a big question right now. There’s this hybrid model, right, where you’ve got the classical workload, and maybe you’re offloading it or some of it to a quantum accelerator type of thing. I really like that. Oak Ridge really likes that idea. But there is an alternative, which is that I use the quantum computer as a standalone device, almost like a special purpose machine that only solves chemistry problems, or only solves physics problems. It almost becomes a proxy for an experiment itself. That’s a very different model, not one that we use conventional computers for because they don’t have the same physics as the experiment, right? At the moment, we’re kind of looking at both, you know, trying to figure out the advantages for each approach.

HPCwire: You’re just talking about taking advantage of the probabilistic nature of quantum computing to more closely mimic the problem? This is the Richard Feynman idea – simulating on a quantum system that essentially acts the same way as the physical world?

Inside ORNL’s Spallation Neutron Source facility (credit: HPCwire)

Humble: That’s it. [For example] we have the Spallation Neutron Source here at Oak Ridge. We would normally synthesize the material, take it up to the SNS, put it in there, they would characterize it, and we get out a neutron spectrum. What if I could just program in the material whose spectrum I wanted to see onto a quantum computer? I would have almost a quantum twin of what the SNS facility does, what it outputs to me. It should be a fairly accurate representation of what I should expect. That’s different at some level than just running it on an HPC system.
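What a “quantum twin” of an SNS measurement amounts to on paper: the material is modeled as a spin Hamiltonian, and the spectrum the instrument measures corresponds to that Hamiltonian’s excitation energies. The sketch below builds a tiny Heisenberg chain and diagonalizes it classically just to show the target quantity; a quantum computer would be aimed at the same Hamiltonian once the system is too large to treat this way. The model choice is illustrative, not an actual QSC workload.

```python
import numpy as np
from functools import reduce

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_op, site, n):
    """Embed a single-site operator at position `site` in an n-spin chain."""
    ops = [site_op if i == site else I2 for i in range(n)]
    return reduce(np.kron, ops)

def heisenberg_chain(n, J=1.0):
    """Nearest-neighbor Heisenberg Hamiltonian H = J * sum_i S_i . S_{i+1} (open chain, spin-1/2)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for P in (X, Y, Z):
            H += 0.25 * J * op_on(P, i, n) @ op_on(P, i + 1, n)
    return H

H = heisenberg_chain(4)
energies = np.linalg.eigvalsh(H)
# Excitation energies above the ground state -- the kind of spectrum a
# neutron-scattering experiment (or its "quantum twin") would probe.
print(np.round(energies - energies[0], 4))
```

Four spins are trivial classically; it is the exponential growth of the 2^n-dimensional matrix that pushes larger versions of the same calculation toward quantum hardware.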

HPCwire: You hear people, like Nvidia, say, “Well, look, you’ll never be able to do matrix multiplies as efficiently as you can on a GPU.” And maybe that’s true, but it might not make any difference if you’re just doing the direct simulation that essentially mimics the material.

Humble: We’ve gotten some early examples where we have used either the D-Wave quantum computer or the IBM quantum computer to simulate model materials and actually get out results that we can match to experiment. The thing is, though, those models are so small right now that they don’t really surpass where we are today. But it’s headed in that direction. So that’s really got me excited.

HPCwire: Let’s shift gears slightly and talk about QSC’s goals and near-term deliverables for the three focus areas: materials, algorithms and sensors.

Humble: In the materials area, we want to demonstrate that topological non-abelian anyons are present in the materials that we’re creating. There’s a particular type of measurement there called braiding that no one has demonstrated yet in these materials. So we want to be the first to do that. In the sensing area, we actually want to be able to build sensors, arrays of sensors, that can detect quantum states. This would be precisely the types of sensors that we need to look for the anyons and other types of quantum material characterization. It would also be a handoff point to the high-energy physics community for the dark matter search. So in terms of a five-year goal, it’s really about demonstrating a new capability for quantum sensing, based on these multi-array sensors.

In the computing area, we’ve actually picked a couple of different scientific domains – materials chemistry, nuclear physics, and high energy physics – where we are attempting to show that you can use today’s quantum computers to solve some of those scientific problems, not necessarily surpassing the state of the art. That gets into this whole quantum advantage question; demonstrating quantum advantage can be a stretch goal for us right now. But the aim is to show the broad feasibility of using quantum computing – specifically quantum simulation types of calculation, where you’re doing what we were saying earlier, mimicking the quantum system – across all these areas, to validate that this is a good path forward.
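“Quantum simulation types of calculation” usually means approximating the time evolution exp(-iHt) by a product of short steps, each of which maps onto gates – Trotterization. The sketch below checks that idea classically for a small two-term Hamiltonian; the Hamiltonian is made up for illustration only.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Two-qubit Hamiltonian with non-commuting terms: H = Z(x)Z + X on the first qubit.
A = np.kron(Z, Z)
B = np.kron(X, np.eye(2))
H = A + B

t, n_steps = 1.0, 50
exact = expm(-1j * H * t)

# First-order Trotter step: exp(-iA dt) exp(-iB dt), repeated n_steps times.
dt = t / n_steps
trotter = np.linalg.matrix_power(expm(-1j * A * dt) @ expm(-1j * B * dt), n_steps)

print("Trotter error:", np.linalg.norm(exact - trotter))
```

The error shrinks as the step count grows, which on real hardware becomes the trade-off between Trotter accuracy and circuit depth (and hence noise).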

HPCwire: Isn’t this similar to what D-Wave does for optimization problems? It’s not a physical representation, exactly, of the system, but it is similar.

D-Wave Advantage System

Humble: It depends on the problem you choose. Optimization is such a big field already, whether you’re thinking about operations logistics or even recommendation systems. When you map those problems onto the quantum platforms, there’s this encoding, a translation step. In that case, it really is a fundamentally different representation of the problem that you’re solving, but you’re hoping that the solution ends up mapping back to the problem you originally tried to solve.
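The “translation step” Humble describes is typically an encoding of the optimization problem into a QUBO or Ising cost function whose lowest-energy bit string is the answer. The toy max-cut example below shows the encoding and the map-back step, solved by brute force here; it is illustrative and does not use any vendor’s API.

```python
import itertools
import numpy as np

# Max-cut on a 4-node cycle: edges (0,1), (1,2), (2,3), (3,0).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

def cut_size(bits):
    """Number of edges crossing the partition defined by the bit string."""
    return sum(bits[i] != bits[j] for i, j in edges)

def ising_cost(bits):
    """Ising encoding: spin s_i = 1 - 2*bits_i, cost = sum over edges of s_i * s_j.

    Minimizing this cost maximizes the cut -- that is the translation step.
    """
    s = 1 - 2 * np.array(bits)
    return sum(s[i] * s[j] for i, j in edges)

best = min(itertools.product([0, 1], repeat=n), key=ising_cost)
# Map the low-energy bit string back to the original question: the cut size.
print("best assignment:", best, "cut size:", cut_size(best))
```

On an annealer or a gate-based QAOA run, the brute-force minimization is replaced by sampling low-energy states from hardware, but the encode and decode steps around it look the same.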

In these scientific areas, though, we are actually starting off with quantum mechanical models that translate directly into the physical systems that we’re using. And that’s honestly why we chose them, because we think those are going to have the lowest overhead for implementation, and therefore the best chance of demonstrating some type of advantage.

HPCwire: Even with those problems in which you can directly map them to the system, I’m guessing you’ll still get a distribution of results. You have to run the problem, not once, but many times to find some distribution of answers, and pick from among those. But with a non-abelian based system, the distribution will be more reflective of what’s actually happening in nature as opposed to also reflecting noise in the system? Do I have that right?

Humble: Exactly right. The other unique wrinkle here is that sometimes the solution is actually a distribution. So it may not just be a single value you’re looking for; it could actually be a probability distribution. Quantum is perfect for solving those types of problems. What ends up happening is you sample from your device to get a representation of it. Then the question becomes, how many samples do you take and how good was that representation? It all ends up looking a lot like probabilistic computing at some level.
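A small sketch of the “how many samples” question: draw shots from a fixed target distribution (a made-up stand-in for a device’s true output distribution) and watch the total-variation distance between the empirical histogram and the truth shrink with shot count.

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up 3-qubit output distribution standing in for a device's true distribution.
outcomes = np.arange(8)
p_true = np.array([0.30, 0.05, 0.05, 0.10, 0.25, 0.05, 0.05, 0.15])

def empirical_tv_distance(n_shots):
    """Total-variation distance between the empirical and true distributions after n_shots samples."""
    counts = np.bincount(rng.choice(outcomes, size=n_shots, p=p_true), minlength=8)
    p_hat = counts / n_shots
    return 0.5 * np.abs(p_hat - p_true).sum()

for shots in (100, 1_000, 10_000, 100_000):
    print(f"{shots:>7} shots -> TV distance {empirical_tv_distance(shots):.4f}")
```

The distance falls off roughly like one over the square root of the shot count, which is why characterizing a distribution-valued answer accurately can take many repetitions of the same circuit.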

HPCwire: You’ve said the center also looks at other technologies like superconducting or ion trap or these things. What does that entail?

IBM Eagle QPU

Humble: A good example is IBM, one of our partners within the center. We actually work with them on these application areas, trying to map materials problems onto their devices, the IBM devices, and they’re providing us feedback on what is the best way to do the mapping and mitigate against the noise. The output is we get a really nice publication with them demonstrating the feasibility of solving science problems on today’s devices. IBM is invested in that because they want to understand the capabilities of their system. We’re happy with the partnership because they’re giving us access to some of their best systems for this purpose. We’re doing the same thing with a company called ColdQuanta. They’re a small startup that actually is getting a lot bigger. They’re using a different technology based on cold atoms that get trapped in electromagnetic fields. But it’s the same idea. We’re working with them on how we can provide feedback to make that system better. In this case, actually, for quantum sensing algorithms.
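“Mitigate against the noise” in collaborations like this often refers to techniques such as zero-noise extrapolation: run the same circuit at deliberately amplified noise levels and extrapolate the measured observable back toward the zero-noise limit. Below is a toy classical illustration of the fit with an assumed exponential decay; it is not IBM’s API or necessarily the specific method used in the partnership.

```python
import numpy as np

# Suppose the ideal expectation value is 1.0 and noise decays it exponentially.
ideal_value = 1.0

def noisy_expectation(scale, decay=0.15):
    """Observable measured with the circuit's noise artificially scaled by `scale`."""
    return ideal_value * np.exp(-decay * scale)

# Measure at noise scale factors 1, 2, 3 (e.g., by gate folding), then fit in log space
# and extrapolate back to scale 0 -- the zero-noise estimate.
scales = np.array([1.0, 2.0, 3.0])
values = noisy_expectation(scales)
coeffs = np.polyfit(scales, np.log(values), 1)
zero_noise_estimate = np.exp(coeffs[1])  # fitted value at scale = 0
print(f"raw (scale=1): {values[0]:.3f}   extrapolated: {zero_noise_estimate:.3f}")
```

The raw measurement sits below the true value, while the extrapolated estimate recovers it, which is the payoff of the extra runs at amplified noise.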

HPCwire: Is QSC’s near-term output primarily peer-reviewed papers that sort of demonstrate what you’ve accomplished so far?

Humble: We’ve had a really good publication record to date. I think we’re up to like 111 peer reviewed publications as of last month. But in addition, we’re focusing a lot on invention disclosures and software copyrights because we see those as ways to get these ideas out in industry a little bit faster. It’s great to publish papers, but really the role of the centers is to act as engines of innovation within this big QIS ecosystem.

So just publishing papers isn’t good enough, to be honest; we actually have to transition the technology. With Microsoft, we’re actually taking on some of the higher-risk materials. And once they get vetted by us, we’ve got mechanisms in place to hand them off so that Microsoft can look into developing them further. With some of our university partners – Purdue is a big partner here – they’re actually training students on the materials and devices we’re making, and those students are going off into industry and picking up positions where they can now, you know, transfer these ideas.

HPCwire: One of the challenges for external observers is figuring out how the various quantum research programs fit together, even just at ORNL. I’m thinking of Rafael Pooser, who for a while was working on a DOE quantum testbed, and Nick Peters, who’s working on quantum networking.

Humble: Within Oak Ridge, we actually have multiple programs funded by the Department of Energy and elsewhere to support development of quantum. QSC is one of those. I think of QSC as the focal point for transitioning basic science into the more applied areas. So whether it’s partnership with industry or workforce development, QSC is our focal point there. But we [ORNL] also have our QIS section. This is the one that Nick Peters leads, focused on computing, sensing, networking, and core capabilities in that area. We also have a quantum materials program funded directly by the Department of Energy. And we have a quantum computing user program that runs out of the OLCF (Oak Ridge Leadership Computing Facility), which is providing access to commercial quantum computing systems.

Then we have all the other user facilities such as the SNS and CNMS (Center for Nanophase Materials Sciences) and these nanoscale facilities. So internally, it’s almost like a roundtable of these different stakeholders in this field of quantum. The goal, though, is to keep up with DOE’s priorities in this area. Even before the National Quantum Initiative Act, a letter was sent out emphasizing QIS as a priority across the DOE Office of Science. QSC is kind of a focal point for all of this, but by far not the only piece.

HPCwire: How do the groups interact? Is it just ad hoc?

Humble: No, we’re much better coordinated. If you think of it as like a Venn diagram, the QIS section and the QSC are actually overlapping each other substantially. But there are parts of the Oak Ridge quantum research portfolio that are not within QSC – networking, for example. QSC doesn’t have any networking activity within its thrust areas. Nick is pursuing quantum networking through a separate DOE program in that area. Now, of course, we’re trying to figure out how they leverage each other. How can you build up the quantum networks, the quantum facilities, and then the Quantum Science Center? But it’s really much more tightly coordinated, because it’s all the same people, all the same laboratories.

HPCwire: Are you still able to do research?

Humble: Yes. I’ve got a couple of programs, all of them focused on quantum computing. Some of them are focused on software for quantum computing – building the compilers and languages that can do this hardware-agnostic programming we’re talking about. The other is on developing applications in the chemistry space. How to do quantum chemistry using quantum computers is a really interesting problem, because the chemistry community is in some ways perfect to take on quantum computing; they already understand the mathematics, and the problem space is well set up. But getting those problems onto these computers is difficult because of the technology limits, and so we spend a lot of time looking for the best ways to program things.

HPCwire: You mentioned a user program separate from QSC. What’s that all about?

Humble: I also manage the quantum computing user program, which is part of OLCF and provides access to commercial systems. A couple of years ago, the Department of Energy started providing us with funds to buy subscriptions to different commercial vendors, [including] IBM, Rigetti and Quantinuum. Those are the three we have in the program right now. We use the OLCF proposal review system to basically recruit users’ projects onto these devices. The big requirement there is you have to publish your results. We’ve got probably close to 40 publications this year alone, more than 70 for the life of the program. What we’re really doing, though, is monitoring the progress that we’re making on these systems. Are the problems they’re solving getting bigger? Are they getting better results? These types of things. Users normally get six months of access, and a chance to renew it for six more. At the moment, we’ve got over 200 users in the program, mostly from the DOE labs, but also from universities and a few from industry.

HPCwire: The systems you mentioned are either superconducting or trapped ion. Have you thought about offering photonics or other types of systems?

Humble: We had Xanadu (photonics) for a while but we ended up not renewing that contract. Of course, we’re keeping our eyes open for all the technologies that are out there. I personally don’t have a strong opinion about which technologies are in front; I think they’re all still being evaluated.

HPCwire: Thanks very much for your time.

 

Travis Humble Bio
Travis Humble is director of the Quantum Science Center, a Distinguished Scientist at Oak Ridge National Laboratory, and director of the lab’s Quantum Computing Institute. Travis is leading the development of new quantum technologies and infrastructure to impact the DOE mission of scientific discovery through quantum computing. As director of the QSC, Travis leads the innovation of scalable, resilient quantum information technologies through new materials, devices, and algorithms and facilitates the transfer of quantum technologies to the broadest audience.

In addition, Travis serves as director of the OLCF Quantum Computing User Program by leading the management and operation of quantum computing technologies for a broad base of users. These revolutionary new approaches to familiar computational problems help reduce algorithmic complexity, reduce computational resource requirements like power and communication, and increase the scale at which state-of-the-art scientific applications perform. In this role, Travis leads the design, development, and benchmarking of quantum computing platforms.

Travis is editor-in-chief for ACM Transactions on Quantum Computing, Associate Editor for Quantum Information Processing, and co-chair of the IEEE Quantum Initiative. Travis also holds a joint faculty appointment with the University of Tennessee Bredesen Center for Interdisciplinary Research and Graduate Education working with students on energy-efficient computing solutions. Travis received a doctorate in theoretical chemistry from the University of Oregon before joining ORNL in 2005.
