IBM-led Webinar Tackles Quantum Developer Community Needs

By John Russell

July 21, 2020

Quantum computing has many needs before it can become the transformative tool many expect. High on the list is a robust software developer community, not least because developers rarely follow ‘intended use’ rules; instead, they bend and break rules in unforeseen ways that transform new technology into applications that catalyze market growth, sometimes explosively.

In fact, efforts to fill out the quantum computing ecosystem (hardware and software) have continued to expand rapidly. Just today, for example, the Trump Administration announced the establishment of three new Quantum Leap Challenge Institutes around quantum sensing, hybrid classical-quantum systems, and large-scale quantum system development (See HPCwire coverage). Making full use of the new class of machines – whenever they arrive – will require a robust and sufficiently large quantum developer community.

Roughly a week ago, IBM held the first of a planned series of webinars on quantum computing – this one on The Future of Quantum Software Development – and it was a treat. Moderated by long-time Forrester analyst Jeffrey Hammond, the panel included three prominent quantum community voices with diverse expertise – Blake Johnson (control systems delivery lead, IBM, and formerly Rigetti), Prineha Narang (Harvard professor and CEO/founder, Aliro Quantum), and tech entrepreneur William Hurley (CEO/founder, Strangeworks).

It was a “glass-half-full” crowd, so consider their enthusiasm when evaluating comments, but it was also a well-informed group not dismissive of challenges.

“You have to realize where we’re actually at,” said Hurley. “We’re not in the days of AMD versus Intel. We’re in the days of little mechanical gates versus vacuum tubes and other solutions. These things aren’t computers from my pure developer standpoint. At this moment in time they’re really great equipment for exploring the quantum landscape and are the foundation for building machines.”

Indeed, today’s quantum systems are fragile and complicated, and even the notion of gates can be confusing. “At what point is a gate so complicated that a developer is never going to touch it or understand it anyway? Is that 1,000 (qubits)? Is it 100,000? Is it a million? Because you hear people with all these stories about, you know, millions-of-qubits machines. If you had it, my question is who would program it? Because that sounds really difficult based on where we’re at and where we’re trying to go,” said Hurley.

Here are a few themes from the discussion.

  • Fast Followers Will Lose! Quantum computing’s inflection point will be such that if you’re not already in the game, you won’t catch up. “You can’t be a fast follower. You have to either be placing a bet now or deciding to take the risk of not being involved at the point where something that is clearly a tremendous change in computing happens,” said Hurley, with agreement from Johnson.
  • It Will Be a Hybrid World. Yes, there will be ‘general purpose’ quantum computers, although only for a limited set of quantum-appropriate problems. There will also be specialization, with various qubit technologies (ion trap, cold atom, superconducting, etc.) excelling on different applications. Lastly, all quantum computing will be done in a hybrid classical-quantum computing environment.
  • QA Isn’t Far Away (Maybe). There was adamant agreement among panelists that a decade is too pessimistic…but there was waffling on just how soon quantum advantage (QA) would be achieved. Two-to-five years was the consensus guess-of-choice, although Narang declined to make any guess. One had the sense their belief in quantum computing’s inevitable breakthrough trumped worry over when. Meanwhile the QA watch continues.
  • Expanded (Developer) Conversation Needed. The quantum conversation now is mostly between system developers, who tend to be physicists, and algorithm developers, who also tend to be physicists. That has to change. It will require better quantum computers, wider access to various qubit technologies, better tools, and a level of software abstraction that lets developers do what they do without worrying about quantum physics.

The panel discussion was casual and substantive if not technically deep, and IBM has posted a link to it. Next up is a webinar on Building a Quantum Workforce (July 28), and there are plans for another in late August (no date yet) on Commercial Use of Quantum Computers.

Clearly, each of the participating companies has its own agenda but nevertheless the give-and-take had an insider feel.

IBM, of course, is the biggest player in quantum computing today, with deep expertise in hardware and software and its IBM Q Network, which offers various levels of access to quantum resources. IBM quantum systems use semiconductor-based superconducting qubits. Panelist Johnson is a relatively recent IBM import from Rigetti. His work is fairly deep in the weeds and focuses on control systems, which convert conventional instructions (electrical signals) into quantum processor control signals.

Aliro and Strangeworks are start-ups focused on software.

Aliro describes its offering as a “hardware-independent toolkit for developers of quantum algorithms and applications. The development platform is implemented as a scalable cloud-based service. Features include: access to multiple QC hardware vendors and devices via an intuitive GUI as well as REST API; quantum circuit and hybrid workflow visualization and debugging tools; cross-compilation between high and low-level languages; and hardware-specific optimizations enabling best execution on every supported hardware platform.” CEO Narang is also an assistant professor of computational materials science at Harvard.

Strangeworks says its “platform is a hardware-agnostic, software inclusive, collaborative development environment that brings the latest advancements, frameworks, open source, and tools into a single user interface.” CEO Hurley is a veteran tech entrepreneur who also chairs the IEEE Quantum Computing Work Group.

These are early days for both of these young companies, and they are broadly representative of a growing number of start-ups seeking to fill out the quantum computing ecosystem. It is probably best to watch/listen to the conversation directly to get a sense of the issues facing software development in quantum computing, but also to gain a glimpse into the mindset of the young companies entering the quantum computing fray.

 

Presented here are a few soundbites (lightly edited) from the panel.

WHAT’S THE STATE OF QUANTUM PROGRAMMING AND WHAT ARE SOME OF THE CHALLENGES?  

Johnson: Recognizing that there was something maybe intimidating about quantum, IBM chose to first develop a graphical interface, a graphical drag-and-drop way to build quantum circuits, which are the fundamental unit of quantum compute. That’s what’s available today in IBM Quantum Experience. You can drag and drop gates, which are the logical operations that manipulate qubits, which are quantum bits, to build up a program. For more real tasks, you need a real programming interface, and we have Qiskit, an open-source quantum computing framework developed by IBM that provides a Python interface for building quantum circuits and for building algorithms that take advantage of quantum processors.
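For readers who haven’t seen Qiskit, the following minimal sketch (not from the panel) shows what building and sampling a small circuit looks like in Python; import paths and backend names have shifted across Qiskit releases, so treat the specifics as illustrative rather than canonical.

    # Minimal Qiskit sketch: build a two-qubit Bell-state circuit and sample it.
    # API names here reflect Qiskit circa 2020 (Aer, execute); newer releases
    # reorganize these imports, so adjust for your installed version.
    from qiskit import QuantumCircuit, Aer, execute

    qc = QuantumCircuit(2, 2)     # two qubits, two classical bits
    qc.h(0)                       # put qubit 0 into superposition
    qc.cx(0, 1)                   # entangle qubit 0 with qubit 1
    qc.measure([0, 1], [0, 1])    # read both qubits out

    backend = Aer.get_backend("qasm_simulator")
    counts = execute(qc, backend, shots=1024).result().get_counts()
    print(counts)                 # expect roughly half '00' and half '11'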

Narang: What I like about Qiskit is that it’s very accessible. The challenge is, with that abstraction, you lose a lot of the control over the actual hardware; you don’t necessarily have all of the tools to directly program the system. So the pulse-level control that IBM has made available is a good way to bridge that. I wonder, as we go toward other types of hardware, how some of the programs that are written for superconducting circuits will be translated to those other types of hardware, and if everything is not based on the same pulse-level control scheme, what would be a good way of translating? I don’t have an answer to this.
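To make “pulse-level control” concrete, here is a hedged sketch using Qiskit’s pulse module; the channel index and pulse parameters are invented for illustration, and real values would come from a specific backend’s calibration data.

    # Hedged sketch of pulse-level control with qiskit.pulse: play a Gaussian
    # microwave pulse on qubit 0's drive channel. The duration/amp/sigma values
    # are placeholders; a real experiment uses backend calibration data.
    from qiskit import pulse
    from qiskit.pulse import DriveChannel
    from qiskit.pulse.library import Gaussian

    with pulse.build(name="custom_drive") as sched:
        pulse.play(Gaussian(duration=160, amp=0.2, sigma=40), DriveChannel(0))

    print(sched)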

Hurley: Our approach is to let all of the languages battle it out. We’re big Qiskit fans. I say that not because we’re on IBM, but we first started [there because] there were already tons of people working with it. We’re big supporters, soon to be making our first contributions to it. But you can’t take a developer and make them a quantum developer overnight. Some things are fundamentally different. For example, if I’m programming on any other platform that’s possible in the world and I run into an error, I can find it or I know how it works. Whereas [with quantum] what we see happen with developers is they get in and they can instantiate a teleportation thing through Microsoft Quantum Katas or IBM Qiskit, or whatever. Then the moment it breaks, if they don’t understand the fundamental physics behind it, they’re at a dead end.

Narang: A lot of things that we take for granted in a classical computer are not yet possible with quantum hardware. [I’m] thinking about conditional statements and intermediate measurements, things that are not trivial to do in a quantum circuit at the moment but that are going to be very important for writing more complex quantum programs in the future. As those advances come from the hardware side, we think about how to translate them into something that you can use on the software side.
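As a hedged illustration of the conditional logic Narang is describing, Qiskit can already express a mid-circuit measurement that gates a later operation; whether a given backend actually executes such circuits is hardware-dependent.

    # Hedged sketch: a mid-circuit measurement feeding a classically
    # conditioned gate, the construct Narang notes is non-trivial on
    # today's hardware even though the software can express it.
    from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

    qr = QuantumRegister(2)
    cr = ClassicalRegister(1)
    qc = QuantumCircuit(qr, cr)

    qc.h(qr[0])
    qc.measure(qr[0], cr[0])      # measure in the middle of the circuit
    qc.x(qr[1]).c_if(cr, 1)       # apply X to qubit 1 only if the result was 1
    print(qc.draw())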

GIVEN THE NASCENT STAGE OF QUBIT TECHNOLOGIES – SUPERCONDUCTING, TRAPPED ION, COLD ATOMS, ETC. – WILL THERE BE SPECIALIZED MACHINES?

Narang: Tricky question. I’m trying to see how to answer it without making all of my colleagues angry at me. My personal view is there will be certain problems that will run just fine on a variety of hardware, and some that might be more specialized to particular types of hardware, and that’s just associated with the physics underlying that type of hardware. This will be especially important as we try to map problems from condensed matter and chemistry onto some of these devices. We’ll see that not every technology is ideal or even possible for all kinds of problems. But we don’t have that kind of experience yet.

Photo of IonQ’s ion trap chip with image of ions superimposed over it. Source: IonQ

[Currently] there are only a few different trapped ion systems out there. It’s very hard to get access to those, and not many of them have gone the route that IBM did with making at least the smallest devices available very broadly. So there’s a whole lot more that needs to be done on a simulator before you can actually try it on real hardware. And of course, systems like cold atoms or photonic circuits are really niche. Getting time on those is very expensive and almost unaffordable for the average developer. The best you can hope to offer them is to say, “Hey, if you write something that works on this genre of systems, we can get you to a point where it runs on other systems,” and that’s something that my group is trying to accomplish at the moment.

Hurley: On my desk [I have an] iPhone and iPad and a 16-inch laptop, okay? And those things can all do email and they can all surf the web, and they can do them really well. But I can’t open Xcode and compile a big program on my iPhone. It’s going to be like that in quantum, and it’s going to take it in directions that none of us can imagine – it would be foolish to try. There are already 16 startups I’m following who are making quantum processors for specific applications, exactly like Pri described. Look at supercomputing today and high performance computing. There are computers built on Wall Street just for doing trades or [even] specific types of trades. We’ve seen this throughout computing history, right? I don’t think quantum will be any different. I hope that there are as many hardware and software solutions available as possible.

WHEN WILL WE ACHIEVE QUANTUM ADVANTAGE?

Johnson: The problem is we don’t know exactly how powerful a quantum machine needs to be in order to get to quantum advantage. We’ve committed to and have been on a track of doubling quantum volume[i] (a broad performance metric) every year. We put up and made publicly available our first machine with a quantum volume of 32 back in April. We now have eight machines with a quantum volume of 32 that are available in IBM Quantum Experience, and we continue to march along that path of doubling QV [yearly]. At what point is it a powerful enough machine for quantum advantage? I’m not sure. I would say, personally, I’d be surprised if we have to get all the way to fault tolerance to find a single application where you can do something with quantum advantage, whether that’s time-to-solution, cost or whatever, against a classical resource.
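To put the doubling cadence in perspective, a trivial back-of-the-envelope projection from the QV 32 figure Johnson cites looks like this; it simply assumes the yearly doubling holds, which is a goal, not a guarantee.

    # Back-of-the-envelope projection of IBM's stated goal of doubling
    # Quantum Volume every year, starting from QV 32 in 2020 (per the panel).
    # Purely illustrative; it assumes the cadence holds.
    qv = 32
    for year in range(2020, 2026):
        print(year, qv)
        qv *= 2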

Hurley: If you’re in this industry, if you want to be a developer in this industry, it needs to be a long-term play, a long-term vision that you have. The pessimist thinks [quantum payoff is] 20 years out, the optimist thinks it’s three years out, and the reality is, if you want to be involved in it, then you should be preparing now, because all I can tell you is at some point between tomorrow and some future tomorrow it will happen, and the inflection point will be steep.

A rendering of IBM Q System One, the world’s first fully integrated universal quantum computing system, currently installed at the Thomas J Watson Research Center. Source: IBM

Johnson: Those of us that build hardware understand that the most critical thing preventing us from reaching quantum advantage is the hardware, and so our first tools are really focused on those domain experts, to give them the tools they need to build better hardware. That’s an important audience that we like, and we will continue to make better and better tools to serve that audience. But as Pri mentioned, people are doing applications research with the devices that exist today and are finding they maybe can’t yet solve a system problem better than they could with a classical resource, but they can solve problems. They’re starting to figure out what the limitations are, how they can squeeze the most utility out of the devices today, and then getting ready for the devices that exist tomorrow.

So we’re starting to build tools that try to lower the barriers to entry for those people – not the domain experts, but a new audience. We don’t want them to have to learn everything about quantum computing in order to be able to get started. The idea here was to try to reach out to this new audience of developers [such that] they can write their programs by describing the problem that they want to solve – in this case, an optimization problem. So they can write down a quadratic formula, they write down some description of the constraints of their optimization problem, and they can choose different solvers, both quantum solvers and classical solvers, because a lot of these developers are trying to [understand] the value today and [how] quantum works versus the classical.
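The panel doesn’t name a specific tool here, but the workflow Johnson describes maps onto Qiskit’s optimization module; the sketch below assumes the qiskit-optimization package (import paths have moved between releases) and uses a classical reference solver, with a quantum solver such as QAOA being a drop-in replacement.

    # Hedged sketch of the workflow Johnson describes: write down a quadratic
    # objective and constraints, then hand the problem to a solver. Assumes
    # the qiskit-optimization package; module paths vary by release.
    from qiskit_optimization import QuadraticProgram
    from qiskit_optimization.algorithms import MinimumEigenOptimizer
    from qiskit.algorithms import NumPyMinimumEigensolver  # classical reference solver

    qp = QuadraticProgram("toy_problem")
    qp.binary_var("x")
    qp.binary_var("y")
    qp.minimize(linear={"x": -1.0, "y": -2.0}, quadratic={("x", "y"): 3.0})
    qp.linear_constraint(linear={"x": 1, "y": 1}, sense="<=", rhs=1, name="pick_one")

    # Swapping in a quantum solver (e.g. QAOA) routes the same problem to a
    # quantum backend or simulator instead of the classical eigensolver.
    result = MinimumEigenOptimizer(NumPyMinimumEigensolver()).solve(qp)
    print(result)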

Hurley: Looking at it purely from 30 years of seeing new tech coming down and doing the development, I think 10 years sounds very pessimistic now. What do most people imagine as a quantum computer in 10 years? Is it a full general-purpose machine, whatever? Who knows? But I don’t think you’re going to wait 10 years. I think it’s more in the two-to-five-year range to where there are things that start to become economically advantageous to enterprises, and it will probably be in chemistry and materials science.

HOW BIG MUST THE DEVELOPER COMMUNITY BECOME AND WHAT’S THE KEY TO DOING THAT?

Hurley: Most of the people building machines aren’t actually talking to developers. They’re talking to physicists who can download development tools, and they’re playing the role of developers. Software developers are not necessarily the greatest physicists, and physicists are not all the greatest software developers, right? [We need] to drive it to a point where it’s not 200 people who can program the machines; it’s 2 million people. I believe this is the first real leap; in the next 10 years computing will change more than it has in the last hundred.

Johnson: Open source, I think, is a critical piece to accelerating these kinds of developments, because we want these tools to be available to the broadest audience possible. Open source is a great accelerator for making that happen. [But even] on the fastest timescale, we’re a long way off from the App Store experience, where on my phone I can get an app that takes advantage of current resources. But I think we’re not that far away from the developer equivalent of that, which is, you know, package management systems where I can just say pip install or brew install some package, which is a quantum library for some application domain. The goal is to have the equivalent of the iPhone experience of today.

If I’m an iPhone developer and I want to develop a new app that uses augmented reality to check the position of a basketball, that itself is a non-trivial machine vision task, right? But we don’t ask every iPhone developer to be a machine vision expert. They just plug into [the tools] they want to use with ARKit, which is Apple’s augmented reality solution. And off they go. We need to get to that point – and I don’t think it’s that far away – where a developer can use quantum resources without having to be an expert in quantum computing.

Google’s Sycamore quantum chip

Hurley: If you think back to 2007, there were 400 of us a week after [the iPhone introduction] with iPhones, and people hacking on them. It went to tens of thousands almost instantly, right? Within six months to a year. And then over the course of 10 or 11 years, you get to 23 million people who are doing that. That mass of developers being involved drives apps, exactly as Blake just said. You have to have a mass of developers to do that. That’s where quantum computing faces its biggest challenge – when it gets, what I call, out of the lab and into the real world and all of a sudden there’s a million developers. Because developers rarely use things in the exact way they were intended to be used. They will find more uses, they will find the bugs, they will find the weaknesses. So the faster we can get to that point the better. I mean, Pri, what are your thoughts?

Narang: We’re taking some of the circuits run on superconducting silicon [devices] to trapped ion and realizing that some don’t work the same way. And yeah, forget about developers breaking things in new ways – even experienced people break things in ways they didn’t anticipate and have to call an engineer and say, hey, how do I fix this? If we expect, you know, a million developers entering the community to have to get answers from an expert engineer, that’s probably not a very scalable model. Something that could be useful is having better simulators that allow you to replicate some of the noise associated with current hardware, to see how things are performing. Also, simple stuff like getting runtime estimates. Getting a yea or nay on whether your circuit is actually going to fit on the device you’re trying to run it on – that’s a problem I’ve seen a lot of people have. They have a beautiful idea, and they assume that it can run on this really tiny device. I think there are different levels to how we make it easier for developers who are entering the field.
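Some of what Narang wishes for already exists in early form; the hedged sketch below assumes you have an IBM Quantum backend handle called backend and a measured circuit called qc, and shows a device-fit check plus a noise-model simulation using Qiskit Aer.

    # Hedged sketch of two checks Narang mentions: does the circuit fit the
    # target device, and how does it behave under that device's noise?
    # Assumes 'backend' is an IBM Quantum backend handle and 'qc' is a
    # measured QuantumCircuit; requires qiskit-aer. Names/paths vary by release.
    from qiskit import transpile
    from qiskit.providers.aer import AerSimulator
    from qiskit.providers.aer.noise import NoiseModel

    # 1) Will it fit? Compare circuit width against the device's qubit count.
    n_device = backend.configuration().n_qubits
    if qc.num_qubits > n_device:
        raise ValueError(f"Circuit needs {qc.num_qubits} qubits; device has {n_device}")

    # 2) How noisy will it be? Simulate with a noise model derived from the device.
    noise_model = NoiseModel.from_backend(backend)
    sim = AerSimulator(noise_model=noise_model)
    counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
    print(counts)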

Link to panel video: https://www.youtube.com/watch?v=fBP6qTc_fGU&feature=youtu.be

[i] From IBM Research: “Quantum Volume (QV) is a hardware-agnostic metric that we defined to measure the performance of a real quantum computer. Each system we develop brings us along a path where complex problems will be more efficiently addressed by quantum computing; therefore, the need for system benchmarks is crucial, and simply counting qubits is not enough. As we have discussed in the past, Quantum Volume takes into account the number of qubits, connectivity, and gate and measurement errors. Material improvements to underlying physical hardware, such as increases in coherence times, reduction of device crosstalk, and software circuit compiler efficiency, can point to measurable progress in Quantum Volume, as long as all improvements happen at a similar pace.” https://www.ibm.com/blogs/research/2020/01/quantum-volume-32/
