How the US Could Achieve Superconducting Supercomputing in Five Years

By Tiffany Trader

December 9, 2014

The Intelligence Advanced Research Projects Activity (IARPA) has officially commenced a multi-year research effort to develop a superconducting computer as a long-term solution to the power, cooling and space constraints that afflict modern high-performance computing. First revealed in February 2013, when the agency put out a call for proposals, the Cryogenic Computing Complexity (C3) program aims to pave the way for a new generation of superconducting supercomputers that are far more energy efficient than machines based on complementary metal oxide semiconductor (CMOS) technology.

Studies indicate the technology, which uses low temperatures in the 4-10 kelvin range to enable information to be transmitted with minimal energy loss, could yield one-petaflop systems that use just 25 kW and 100-petaflop systems that operate at 200 kW, including the cryogenic cooler. Compare this to the current greenest system, the L-CSC supercomputer from the GSI Helmholtz Center, which achieved 5.27 gigaflops-per-watt on the most recent Green500 list. Scaled linearly to an exaflop, a system at that efficiency would consume about 190 megawatts (MW), far above DARPA’s exascale power targets, which range from 20 MW to 67 MW.

IARPA C3 program performance projections
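
These figures are straightforward to verify. The short Python sketch below (not from the article) reproduces the cited numbers; the linear-scaling assumption is the same one made in the paragraph above.

```python
# Back-of-the-envelope check of the efficiency figures cited above.
# All inputs come from the article; linear scaling is assumed.

PFLOPS = 1e15  # one petaflop in flop/s

# C3 projections, cryogenic cooler included
print(1 * PFLOPS / 25e3 / 1e9)     # 1 PF at 25 kW   -> ~40 gigaflops/watt
print(100 * PFLOPS / 200e3 / 1e9)  # 100 PF at 200 kW -> ~500 gigaflops/watt

# Greenest CMOS system cited: L-CSC at 5.27 gigaflops/watt
exaflop_watts = 1e18 / (5.27 * 1e9)
print(exaflop_watts / 1e6)         # ~190 MW, versus DARPA's 20-67 MW targets
```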

The C3 project, which recently awarded an unspecified amount of funding to vendors IBM, Raytheon-BBN and Northrop Grumman Corporation, will focus on developing and integrating superconducting logic with new kinds of cryogenic memory as the basis for a small-scale working model of a superconducting computer. Superconducting circuit fabrication will be provided by MIT Lincoln Laboratory and independent test and evaluation will be carried out by NIST, Boulder.

Funding high-risk/high-payoff research in support of national intelligence is IARPA’s specialty, and superconducting computing is seen as a serious post-silicon contender for HPC, according to C3 Program Manager Dr. Marc Manheimer, who was interviewed for this piece. A long-time laboratory physicist with expertise in superconductivity and cryogenic magnetic phenomena, Dr. Manheimer provided additional details about the state of the technology and the scope of the program.

HPCwire: You’ve written about the promise of superconducting and cryogenic technologies to address the space and energy challenges of traditional silicon-based supercomputing. How tractable a problem is superconducting supercomputing?

Dr. Manheimer: IARPA only takes on the hardest problems, so it’s a serious technical challenge, but we think we have a path forward to solve all of the challenges associated with superconducting supercomputing. In particular, the challenge that I see as the hardest is to develop high-density, high-efficiency, low-latency, cryogenic memory.

HPCwire: More so than the logic?

Dr. Manheimer: The logic has been around in primitive forms for about 25-30 years, and a number of primitive circuits have been fabricated and tested, so we think we can move forward with the logic in a pretty straightforward manner. We’ve developed a fabrication facility at Lincoln Laboratory, and we’re upgrading it so that the fab can produce circuits at the level we need to prove out this technology. On the other hand, these cryogenic memory ideas – the other half of the program – are very new and for the most part untested, and we will have to develop the basic memory cells, put them into an array, and drive and control them in a pretty short time frame compared with typical technology development.

HPCwire: The release put out by IARPA mentions the program would be carried out in two stages: component development for the memory and logic subsystems, then integration. Is the roadmap still following the three-year plus two-year split outlined last year?

Dr. Manheimer: Yes, for the first three years, the logic people have to produce some key demonstration circuits and the memory people have to produce a small-scale but complete memory, including decoders and drivers, and we also need the performers to develop a plan for how they are going to integrate these. That’s the first stage. Then we’ll have another call for proposals for the second stage.

HPCwire: So the three vendors that were awarded funding, are these first-phase partners?

Dr. Manheimer: These are independent projects. Northrop Grumman has two projects that they’ve succeeded in getting funding for. One is a logic program and one is a memory program. The Raytheon Corporation is running a memory program, and the IBM Corporation is running a logic program. So there are two logic projects and two memory projects.

HPCwire: The release also mentioned standard benchmarking programs…can you tell me more about these?

Dr. Manheimer: We’re developing a prototype, a small-scale computer, and we’re going to have to figure out what applications it’s suitable for as we scale it up. We’re going to be talking to a variety of customers with a variety of application types, and we’re going to have our customers develop programs that they think will be useful in telling them what the potential of superconducting supercomputing is for their applications.

HPCwire: How different of an ecosystem is this compared to traditional silicon-based CMOS?

Dr. Manheimer: What we’re planning to do is reuse programs so we can use standard software, but one of the things that is missing from our ecosystem, and readily available in the semiconductor ecosystem, is the development software suite. Right now if you want to develop a superconducting logic circuit of any scale, you pretty much have to do it on your own. There is no Mentor Graphics program, for example, available for superconducting computing.

HPCwire: Is IARPA thinking about specific applications yet and is this a general purpose system?

Dr. Manheimer: For now, we’re thinking general-purpose computing and we’ll see what develops in the next few years.

HPCwire: Do you think superconducting logic will be the main successor to silicon-based CMOS or is it more likely that we will have multiple computing device-level technologies that will evolve to fill this gap?

Dr. Manheimer: For high-performance computing, I think superconducting supercomputing has a high probability of being the winner. For smaller scale applications, I think CMOS will be perfectly fine for providing general-purpose computing for almost everyone. Clearly, superconducting computing won’t be useful for any portable format. Everyone will be carrying around his or her own cryogenic cooler…no, that’s not going to happen, so CMOS will be around for a long time.

HPCwire: Speaking of cryo-coolers, how much space do they take up?

Dr. Manheimer: Not much. I did a comparison between Titan at Oak Ridge and a projection of our superconducting technology, and we think that, including the cryo-cooler, our supercomputer will take up about one-twentieth of the floor space, and that comparison doesn’t even include Titan’s cooling system.
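
To put that ratio in concrete terms, here is a minimal sketch; Titan’s roughly 404-square-meter footprint is our assumption, not a figure from the interview, while the one-twentieth ratio is Dr. Manheimer’s projection.

```python
# Rough floor-space comparison based on the projection above.
# ASSUMPTION: Titan's ~404 m^2 footprint is commonly cited but
# does not appear in the article.
titan_footprint_m2 = 404
projected_ratio = 1 / 20  # superconducting system vs. Titan, cryo-cooler included
print(titan_footprint_m2 * projected_ratio)  # ~20 m^2 for the projected system
```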

HPCwire: Moving over to performance, what kind of performance goals have you set for the project and in what kind of timeframe? Is it possible to get to exascale and beyond with this technology?

Dr. Manheimer: We’ve only set goals for the C3 program, and we hope to be able to judge from the C3 program results how scalable the technology is. But we have very specific energy and throughput goals in mind for C3, which are hosted on the website [and depicted below]. There are two things that we have to learn from C3. The first is whether you can actually build a supercomputer based on this technology if you really wanted to, and the second is whether you really want to. Is it going to be prohibitively expensive, and is the amount of technology development required going to be too great, with no clear path forward? Those are questions that we have to seriously address at the end of C3.

IARPA C3 program diagram

HPCwire: Any estimate as to how much it will cost to build the world’s first superconducting supercomputer?

Dr. Manheimer: Not really, but if you look at conventional CMOS computers, these big supercomputers cost several hundred million dollars.
