Farewell 2020: Bleak, Yes. But a Lot of Good Happened Too

By John Russell

December 30, 2020

Here on the cusp of the new year, the catchphrase ‘2020 hindsight’ has a distinctly different feel. Good riddance, yes. But the year also offered proof of science’s power to mobilize and do good when called upon. There’s gratitude from those who came through less scathed and, maybe, more willingness to assist those who didn’t.

Despite the unrelenting pandemic, high performance computing (HPC) proved itself an able member of the worldwide community of pandemic fighters. We should celebrate that, perhaps quietly since the work isn’t done. HPC made a significant difference in speeding up and enabling vastly distributed research, and in funneling the results to those who could turn them into patient care, epidemiology guidance, and now vaccines. Remarkable really. Necessary, of course, but it actually got done too. (Forget the quarreling; that’s who we are.)

Across the Tabor family of publications, we’ve run more than 200 pandemic-related articles. I counted nearly 70 significant pieces in HPCwire. The early standing up of Fugaku at RIKEN to participate in COVID-19 research – the system now sits comfortably atop the Top500 for a second time, and by a significant margin – is a good metaphor for HPC’s mobilization. Many people and organizations contributed to the HPC v. pandemic effort, and that work continues.

Before spotlighting a few pandemic-related HPC activities and digging into a few other topics, let’s do a speed-drive through the 2020 HPC/AI technology landscape.

Consolidation continued among chip players (Nvidia/Arm, AMD/Xilinx) while the AI chip newcomers (Cerebras, Habana (now Intel), SambaNova, Graphcore, et al.) were winning deals. Nvidia’s new A100 GPU is amazing, and virtually everyone else is taking potshots at it for just that reason. Suddenly RISC-V looks very promising. Systems makers weathered 2020’s storm with varying success, while IBM seems to be winding down its HPC focus; it also plans to split off its managed infrastructure services business. Firing up Fugaku (notably a non-accelerated system) so quickly was remarkable. The planned Frontier (ORNL) supercomputer now has the pole position in the U.S. exascale race, ahead of the delayed Aurora (ANL).

The worldwide quantum computing frenzy is in full froth as the U.S. looks for constructive ways to spend its roughly $1.25 billion (National Quantum Initiative) and, impressively, China just demonstrated quantum supremacy. There’s a quiet revolution going on in storage and memory (just ask VAST Data). Nvidia/Mellanox introduced its line of 400 Gbps network devices while Ethernet launched its 800 Gbps spec. HPC-in-the-cloud is now a thing – not a soon-to-be thing. AI is no longer an oddity but is quickly infusing throughout HPC. (That happened fast.)

Azure’s AI Supercomputer

Last but not least, hyperscalers demonstrably rule the IT roost. Chipmakers used to, consistently punching above their weight (as measured by sales volume). Not so much now:

  • “Nvidia, now the largest U.S. chipmaker by market cap, has a value of $330 billion, and Intel is at $207 billion. The cloud behemoths, Amazon, Microsoft and Google-parent Alphabet, each top $1 trillion in market valuation.” – Wall Street Journal (Dec. 20, 2020).
  • “In the high-stakes, winner-take-all world of the hyperscale elite, 11 companies spent more than $1 billion apiece on IT infrastructure in 2018, three spent more than $5 billion and one, Google, broke the $10 billion spend barrier,” reported HPCwire last year, when Intersect360 Research reported the worldwide hyperscale market totaled $57 billion in IT spending in 2018, a 30 percent expansion over 2017. – HPCwire (October 31, 2019)

Ok then. Apologies for the many important topics omitted – e.g., exascale and leadership systems, neuromorphic tech, software tools (can oneAPI flourish?), newer fabrics, optical interconnect, etc.

Let’s start.

  1. COVID-19 PROVED SCIENCE MATTERS GREATLY

I want to highlight two HPC pandemic-related efforts, one current and one from early on, and also single out the efforts of Oliver Peckham, the HPCwire editor who leads our pandemic coverage, which began in earnest with articles on March 6 (Summit Joins the Fight Against the Coronavirus) and March 13 (Global Supercomputing Is Mobilizing Against COVID-19). Actually, the very first piece – Tech Conferences Are Being Canceled Due to Coronavirus, March 3 – was more about interrupted technology events; we picked it up from our sister pub, Datanami, which ran it on March 2. We’ve since become a virtualized event world.

Here’s an excerpt from the first Summit piece about modeling COVID-19’s notorious spike:

Micholas Smith, a postdoctoral researcher at the University of Tennessee/ORNL Center for Molecular Biophysics (UT/ORNL CMB), used early studies and sequencing of the virus to build a virtual model of the spike protein. [A]fter being granted time on Summit through a discretionary allocation, Smith and his colleagues performed a series of molecular dynamics simulations on the protein, cycling through 8,000 compounds within a few days and analyzing how they bound to the spike protein, if at all.

The Summit supercomputer.

“Using Summit, we ranked these compounds based on a set of criteria related to how likely they were to bind to the S-protein spike,” Smith said in an interview with ORNL. In total, the team identified 77 candidate “small-molecule” compounds (such as medications) that they considered worthy of further experimentation, helping to narrow the field for medical researchers.

“It took us a day or two whereas it would have taken months on a normal computer,” said Jeremy Smith, director of UT/ORNL CMB and principal researcher for the study. “Our results don’t mean that we have found a cure or treatment for the Wuhan coronavirus. We are very hopeful, though, that our computational findings will both inform future studies and provide a framework that experimentalists will use to further investigate these compounds. Only then will we know whether any of them exhibit the characteristics needed to mitigate this virus.”
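The funnel Smith describes – run molecular dynamics, score each compound against binding criteria, keep the best – is a classic virtual-screening pattern. Here is a minimal Python sketch of the ranking step; the compounds, fields, and composite score below are illustrative assumptions, not the ORNL team’s actual criteria:

```python
# Hypothetical sketch of ranking docked compounds (not the ORNL workflow).
from dataclasses import dataclass

@dataclass
class DockingResult:
    compound: str
    binding_energy: float    # kcal/mol; more negative = tighter binding
    contact_fraction: float  # fraction of simulation frames in contact with the spike

def score(r: DockingResult) -> float:
    # Assumed composite criterion: weight binding strength by contact persistence.
    return -r.binding_energy * r.contact_fraction

results = [
    DockingResult("cmpd-0001", -9.2, 0.81),
    DockingResult("cmpd-0002", -6.5, 0.40),
    DockingResult("cmpd-0003", -8.7, 0.93),
]  # in the real study, ~8,000 of these

# Keep the best-scoring candidates (the study kept 77).
for r in sorted(results, key=score, reverse=True)[:77]:
    print(r.compound, round(score(r), 2))
```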

The flood (and diversity) of efforts that followed was startling. Oliver’s advice on what to highlight catches the flavor of the challenge: “You could go with something like the Fugaku vs. COVID-19 piece or the grocery store piece, maybe contrast them a bit, earliest vs. current simulations of viral particle spread…or something like the LANL retrospective piece vs. the piece I just wrote up on their vaccine modeling. Think that might work for a ‘how far we’ve come’ angle, either way.”

There’s too much to cover.

Last week we ran Oliver’s article on LANL efforts to optimize vaccine distribution (At Los Alamos National Lab, Supercomputers Are Optimizing Vaccine Distribution). Here’s a brief excerpt:

“The new vaccines from Pfizer and Moderna have been deemed highly effective by the FDA; unfortunately, doses are likely to be limited for some time. As a result, many state governments are struggling to weigh difficult choices – should the most exposed, like frontline workers, be vaccinated first? Or perhaps the most vulnerable, like the elderly and immunocompromised? And after them, who’s next?

“LANL was no stranger to this kind of analysis: earlier in the year, the lab had used supercomputer-powered tools like EpiCast to simulate virtual cities populated by individuals with demographic characteristics to model how COVID-19 would spread under different conditions. “The first thing we looked at was whether it made a difference to prioritize certain populations – such as healthcare workers – or to just distribute the vaccine randomly,” said Sara Del Valle, the LANL computational epidemiologist who is leading the lab’s COVID-19 modeling efforts. “We learned that prioritizing healthcare workers first was more effective in reducing the number of COVID cases and deaths.”
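To give a feel for what such a prioritization experiment looks like in miniature, here is a toy agent-based sketch – emphatically not LANL’s EpiCast, just a crude illustration with assumed parameters – comparing vaccination of high-contact individuals first against random distribution:

```python
# Toy agent-based comparison of vaccine prioritization strategies.
# NOT EpiCast -- every parameter here is an assumption for illustration.
import random

random.seed(1)

def simulate(prioritize_high_contact, n=2000, doses=400, days=120,
             p_transmit=0.04, days_infectious=7):
    # ~25% of agents are "high contact" (healthcare-worker-like).
    contacts = [random.choice([2, 2, 2, 20]) for _ in range(n)]
    state = ["S"] * n  # S=susceptible, I=infected, R=recovered, V=vaccinated
    if prioritize_high_contact:
        order = sorted(range(n), key=lambda i: -contacts[i])
    else:
        order = random.sample(range(n), n)
    for i in order[:doses]:
        state[i] = "V"
    timer = {}
    seeds = [i for i in range(n) if state[i] == "S"][:10]
    for i in seeds:
        state[i] = "I"
        timer[i] = days_infectious
    total_cases = len(seeds)
    for _ in range(days):
        infected = [i for i in range(n) if state[i] == "I"]
        if not infected:
            break
        for i in infected:
            for _ in range(contacts[i]):  # daily random encounters
                j = random.randrange(n)
                if state[j] == "S" and random.random() < p_transmit:
                    state[j] = "I"
                    timer[j] = days_infectious
                    total_cases += 1
            timer[i] -= 1
            if timer[i] == 0:
                state[i] = "R"
    return total_cases

print("random doses      :", simulate(False))
print("high-contact first:", simulate(True))
```

In this toy model, as in LANL’s far richer simulations, covering the high-contact group first typically produces fewer total cases.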

You get the idea. The well of HPC efforts to tackle and stymie COVID-19 is extremely deep. Turning unproven mRNA technology into a vaccine in record time was awe-inspiring and required many disciplines. For those unfamiliar with the mRNA mechanism, here’s a brief CDC explanation as it relates to the new vaccines. Below are links to a few HPCwire articles on the worldwide effort to bring HPC computational power to bear. (The last is a link to the HPCwire COVID-19 Archive, which has links to all our major pandemic coverage):

COVID COVERAGE LINKS

Global Supercomputing Is Mobilizing Against COVID-19 (March 12, 2020)

Gordon Bell Special Prize Goes to Massive SARS-CoV-2 Simulations (November 19, 2020)

Supercomputer Research Leads to Human Trial of Potential COVID-19 Therapeutic Raloxifene (October 29, 2020)

AMD’s Massive COVID-19 HPC Fund Adds 18 Institutions, 5 Petaflops of Power (September 14, 2020)

Supercomputer-Powered Research Uncovers Signs of ‘Bradykinin Storm’ That May Explain COVID-19 Symptoms (July 28, 2020)

Researchers Use Frontera to Investigate COVID-19’s Insidious Sugar Coating (June 16, 2020)

COVID-19 HPC Consortium Expands to Europe, Reports on Research Projects (May 28, 2020)

At SC20, an Expert Panel Braces for the Next Pandemic (December 17, 2020)

What’s New in Computing vs. COVID-19: Cerebras, Nvidia, OpenMP & More (May 18, 2020)

‘Billion Molecules Against COVID-19’ Challenge to Launch with Massive Supercomputing Support (April 22, 2020)

Pandemic ‘Wipes Out’ 2020 HPC Market Growth, Flat to 12% Drop Expected (March 31, 2020)

Folding@home Turns Its Massive Crowdsourced Computer Network Against COVID-19 (March 16, 2020)

2020 HPCwire Awards Honor a Year of Remarkable COVID-19 Research (December 23, 2020)

HPCWIRE COVID-19 COVERAGE ARCHIVE

  2. CHIPS GALORE – TAKE A NUMBER TO BE SERVED
NVIDIA A100 80GB GPU

Making sense of the processor world is challenging. Microprocessors are still the workhorses in mainstream computing, with Intel retaining its giant market share despite AMD’s encroachment. That said, the rise of heterogeneous computing and blended AI/HPC requirements has shifted focus to accelerators. Nvidia’s A100 GPU (54 billion transistors on 826 mm² of silicon, the world’s largest seven-nanometer chip) launched this spring. Then at SC20 Nvidia announced an enhanced version of the A100, doubling its memory to 80GB; it now delivers 2TB/s of memory bandwidth. The A100 is an impressive piece of work.

The A100’s most significant advantage, says Rick Stevens, associate lab director, Argonne National Laboratory, is its multi-instance GPU capability.

“For many people the problem is achieving high occupancy, that is, being able to fill the GPU up – because that depends on how much work you have to do. [By] introducing this MIG, this multi instance stuff that they have, they’re able to virtualize it. Most of the real-world performance wins are actually kind of throughput wins by using the virtualization. What we’ve seen is…our big performance improvement is not that individual programs run much faster — it’s that we can run up to seven parallel things on each GPU. When you add up the aggregate performance, you get these factors of three to five improvement over the V100,” said Stevens.
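Stevens’ throughput argument is easy to see with back-of-envelope arithmetic; the numbers below are illustrative assumptions, not benchmarks:

```python
# Illustrative MIG arithmetic (assumed numbers, not measurements).
# If one job can't fill the GPU, running seven jobs side by side on seven
# MIG slices wins on aggregate throughput even though each job runs slower.
instances = 7
per_instance_speed = 0.6  # assumed: each 1/7 slice runs a job at 60% of full-GPU speed

aggregate = instances * per_instance_speed  # jobs per unit time, normalized to one-at-a-time
print(f"aggregate throughput vs. one job at a time: {aggregate:.1f}x")
# ~4.2x here -- in the range of the 3-5x aggregate gains Stevens describes.
```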

Meanwhile, Intel’s Xe GPU line is slowly trickling to market, mostly in card form. At SC20 Intel announced plans to make its high performance discrete GPUs available to early access developers. Notably, the new chips have been deployed at ANL and will serve as a transitional development vehicle for the future (2022) Aurora supercomputer, subbing in for the delayed Intel Xe-HPC (“Ponte Vecchio”) GPUs that are the computational backbone of the system.

AMD, also at SC20, launched its latest GPU – the MI100. AMD says it delivers 11.5 teraflops peak double-precision (FP64), 46.1 teraflops peak single-precision matrix (FP32), 23.1 teraflops peak single-precision (FP32), 184.6 teraflops peak half-precision (FP16) floating-point performance, and 92.3 peak teraflops of bfloat16 performance. HPCwire reported, “AMD’s MI100 GPU presents a competitive alternative to Nvidia’s A100 GPU, rated at 9.7 teraflops of peak theoretical performance. However, the A100 is returning even higher performance than that on its FP64 Linpack runs.” It will be interesting to see the specs of the GPU AMD eventually fields for use in its exascale system wins.

The stakes are high in what could become a GPU war. Today, Nvidia is the market leader in HPC.

Turning back to CPUs: many in HPC/AI have begun to regard them as the lesser half of CPU/GPU pairings. Perhaps that will change with the spectacular showing of Fujitsu’s A64FX at the heart of Fugaku. Nvidia’s proposed acquisition of Arm – not a done deal yet, given regulatory concerns – would likely inject fresh energy into what was already a surging Arm push into the datacenter. Of course, Nvidia has jumped into the systems business with its DGX line and presumably wants a home-grown CPU. The big mover of the last couple of years, AMD’s Epyc microprocessor line, continues its steady incursion into Intel x86 territory.

There’s not been much discussion around Power10 beyond IBM’s summer announcement that Power10 would offer a ~3x performance gain and ~2.6x core efficiency gain over Power9. The new executive director of the OpenPOWER Foundation, James Kulina, says attracting more chipmakers to build Power devices is a top goal. We’ll see. RISC-V is definitely drawing interest, but exactly how it fits into the processor puzzle is unclear. Esperanto unveiled a machine learning chip with 1,100 low-power cores based on the open-source RISC-V ISA and reported a goal of 4,000 cores on a single device. Europe is betting on RISC-V. However, at least near-term, RISC-V variants are seen as specialized chips.

The CPU waters are murkier than ever.

Sort of off in a land of their own are AI chip/system players. Their proliferation continues with the early movers winning important deployments. Some observers think 2021 will start sifting winners from the losers. Let’s not forget that last year Intel stopped development of its newly-acquired Nervana line in favor of its even more newly-acquired Habana products. It’s a high-risk, high-reward arena still.

PROCESSOR COVERAGE LINKS

Intel Xe-HP GPU Deployed for Aurora Exascale Development

Is the Nvidia A100 GPU Performance Worth a Hardware Upgrade?

LLNL, ANL and GSK Provide Early Glimpse into Cerebras AI System Performance

David Patterson Kicks Off AI Hardware Summit Championing Domain Specific Chips

Graphcore’s IPU Tackles Particle Physics, Showcasing Its Potential for Early Adopters

Intel Debuts Cooper Lake Xeons for 4- and 8-Socket Platforms

Intel Launches Stratix 10 NX FPGAs Targeting AI Workloads

Nvidia’s Ampere A100 GPU: Up to 2.5X the HPC, 20X the AI

AMD Launches Three New High-Frequency Epyc SKUs Aimed at Commercial HPC

IBM Debuts Power10; Touts New Memory Scheme, Security, and Inferencing

AMD’s Road Ahead: 5nm Epyc, CPU-GPU Coupling, 20% CAGR

AI Newcomer SambaNova GA’s Product Lineup and Offers New Service

Japan’s AIST Benchmarks Intel Optane; Cites Benefit for HPC and AI

  3. QUIET REVOLUTION IN STORAGE AND MEMORY

Storage and memory don’t get the attention they deserve. 3D XPoint memory (Intel and Micron), declining flash costs, and innovative software are transforming this technology segment. Hard disk drives and tape aren’t going away, but traditional storage management approaches, such as tiering based on media type (speed/capacity/cost), are under attack. Newcomers WekaIO, VAST Data, and MemVerge are all-in on solid state, and a few leading-edge adopters (NERSC/Perlmutter) are taking the plunge. The data flood and AI compute requirements (gotta keep those GPUs busy!) are the big drivers of data-intensive computing.

“Our storage systems typically see over an exabyte of I/O annually. Balancing this I/O intensive workload with the economics of storage means that at NERSC, we live and breathe tiering. And this is a snapshot of the storage hierarchy we have on the floor today at NERSC. Although it makes for a pretty picture, we don’t have storage tiering because we want to, and in fact, I’d go so far as to say it’s the opposite of what we and our users really want. Moving data between tiers has nothing to do with scientific discovery,” said NERSC storage architect Glenn Lockwood during an SC20 panel.

“To put some numbers behind this, last year we did a study that found that between 15% and 30% of that exabyte of I/O is not coming from our users’ jobs, but instead coming from data movement between storage tiers. That is to say that 15% to 30% of the I/O at NERSC is a complete waste of time in terms of advancing science. But even before that study, we knew that both the changing landscape of storage technology and the emerging large-scale data analysis and AI workloads arriving at NERSC required us to completely rethink our approach to tiered storage,” said Lockwood.
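Quick arithmetic puts Lockwood’s percentages in absolute terms:

```python
# 15-30% of roughly an exabyte of annual I/O is tier-to-tier data movement.
EXABYTE_IN_PB = 1000  # 1 EB = 1,000 PB
low, high = 0.15 * EXABYTE_IN_PB, 0.30 * EXABYTE_IN_PB
print(f"I/O spent shuffling between tiers: {low:.0f}-{high:.0f} PB per year")
```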

Not surprisingly Intel and Micron (Optane/3D XPoint) are trying to accelerate the evolution. Micron released what it calls a heterogeneous-memory storage engine (HSE) designed for solid-state drives, memory-based storage and, ultimately, applications requiring persistent memory. “Legacy storage engines born in the era of hard disk drives have historically failed to architecturally provide for the increased performance and reduced latency of next-generation nonvolatile media,” said the company. Again, we’ll see.

Software-defined storage leveraging newer media has all the momentum at the moment, with the established players – IBM, DDN, Panasas, etc. – mixing those capabilities into their product sets. WekaIO and Intel have battled it out for the top IO500 spot the last couple of years, and Intel’s DAOS (distributed asynchronous object store) is slated for use in Aurora.

“The concept of asynchronous IO is very interesting,” noted Ari Berman, CEO of BioTeam research consultancy. “It’s essentially a queue mechanism at the system write level so system waits in the processors don’t have to happen while a confirmed write back comes from the disks. So asynchronous IO allows jobs [to] keep running while you’re waiting on storage to happen, to a limit of course. That would really improve the data input-output pipelines in those systems. It’s a very interesting idea. I like asynchronous data writes and asynchronous storage access. I can see there very easily being corruption that creeps into those types of things and data without very careful sequencing. It will be interesting to watch. If it works it will be a big innovation.”
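Berman’s description – queue the write, keep computing, and block only when too many writes are outstanding – maps naturally onto asynchronous programming. Here is a minimal Python asyncio sketch of that pattern (illustrative only; DAOS itself is a userspace object store, not asyncio):

```python
# Minimal sketch of the async-write idea (illustrative; not DAOS code).
import asyncio

async def write_to_storage(step: int) -> None:
    await asyncio.sleep(0.5)  # stand-in for slow storage latency
    print(f"  write for step {step} confirmed")

async def main() -> None:
    pending = set()
    for step in range(6):
        # Queue the write and keep computing instead of blocking on it.
        pending.add(asyncio.create_task(write_to_storage(step)))
        await asyncio.sleep(0.1)  # stand-in for the next compute step
        print(f"compute step {step} done")
        if len(pending) > 3:  # "...to a limit, of course"
            done, pending = await asyncio.wait(
                pending, return_when=asyncio.FIRST_COMPLETED)
    await asyncio.gather(*pending)  # drain outstanding writes before exiting

asyncio.run(main())
```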

Change is afoot and the storage technology community is adapting. Memory technology is also advancing.

Micron introduced a 176-layer 3D NAND flash memory at SC20 that it says increases read and write densities by more than 35 percent. In the summer, JEDEC published the DDR5 SDRAM spec, the next-generation standard for random access memory (RAM). Compared to DDR4, the DDR5 spec will deliver twice the performance and improved power efficiency, addressing ever-growing demand from datacenter and cloud environments, as well as artificial intelligence and HPC applications. “At launch, DDR5 modules will reach 4.8 Gbps, providing a 50 percent improvement versus the previous generation. Density goes up four-fold with maximum density increasing from 16 Gigabits per die to 64 Gigabits per die in the new spec.” JEDEC representatives indicated there will be 8 Gb and 16 Gb DDR5 products at launch.
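Some quick arithmetic on the quoted numbers, assuming a standard 64-bit DDR channel:

```python
# DDR5 launch-speed bandwidth and density, from the JEDEC figures above.
transfer_rate_gtps = 4.8              # giga-transfers per second per pin at launch
bus_width_bits = 64                   # assumed: one standard 64-bit module channel
bandwidth_gbs = transfer_rate_gtps * bus_width_bits / 8
print(f"per-module peak bandwidth: {bandwidth_gbs:.1f} GB/s")  # 38.4 GB/s

max_density_ddr4_gbit, max_density_ddr5_gbit = 16, 64
print(f"max per-die density gain: {max_density_ddr5_gbit // max_density_ddr4_gbit}x")
```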

There are always wildcards. IBM’s memristive technology is moving closer to practical use. One outlier is DNA-based storage. Dave Turek, longtime IBMer, joined DNA storage start-up Catalog this year and says Catalog is working on proofs of concept with government agencies and a number of Fortune 500 companies. “Some of these are who’s-who HPC players, but some are non-HPC players — many names you would recognize…We’re at what I would say is the beginning of the commercial beginning.” Again, we’ll see.

STORAGE & MEMORY LINKS

SC20 Panel – OK, You Hate Storage Tiering. What’s Next Then?

Intel’s Optane/DAOS Solution Tops Latest IO500

Startup MemVerge on Memory-centric Mission

HPC Strategist Dave Turek Joins DNA Storage (and Computing) Company Catalog

DDN-Tintri Showcases Technology Integration with Two New Products

Intel Refreshes Optane Persistent Memory, Adds New NAND SSDs

Micron Boosts Flash Density with 176-Layer 3D NAND

DDR5 Memory Spec Doubles Data Rate, Quadruples Density

IBM Touts STT MRAM Technology at IEDM 2020

The Distributed File Systems and Object Storage Landscape: Who’s Leading?

  4. THE EXPANDING QUANTUM SCIENCES TENT
D-Wave’s Advantage chip

It’s tempting to omit quantum computing this year. Too much happened to summarize easily, and the overall feel is of steady carry-on progress from 2019. There was, perhaps, a stronger pivot – at least by press release count – towards seeking early applications for near-term noisy intermediate-scale quantum (NISQ) computers. Ion trap qubit technology got another important player in Honeywell, which formally rolled out its effort and first system. Intel also stepped out from the shadows a bit in terms of showcasing its efforts. D-Wave launched a giant 5,000-qubit machine (Advantage), again using a quantum annealing approach that differs from universal gate-based quantum systems. IBM announced a stretch goal of achieving one million qubits!

Calling quantum computing a market is probably premature, but monies are being spent. The Quantum Economic Development Consortium (QED-C) and Hyperion Research issued a forecast that projects the global quantum computing (QC) market – worth an estimated $320 million in 2020 – to grow at a 27% CAGR between 2020 and 2024, reaching approximately $830 million by 2024. Chump change? Perhaps, but it’s real activity.
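The forecast’s arithmetic is easy to check:

```python
# QED-C/Hyperion projection: $320M in 2020 growing at a 27% CAGR through 2024.
market_2020_musd = 320
cagr = 0.27
market_2024_musd = market_2020_musd * (1 + cagr) ** 4
print(f"projected 2024 QC market: ${market_2024_musd:.0f}M")  # ~$832M, roughly the $830M cited
```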

IBM’s proposed Quantum Volume (QV) metric has drawn support as a broad benchmark of quantum computer performance. Honeywell promoted the QV 128 score of its launch system. In December, IBM reported it too had achieved a QV of 128. The first QV reported by IBM was 16, in 2019 at the APS March meeting. Just what a QV of 128 means in determining practical usefulness is unclear, but it is steady progress, and even Intel agrees that QV is as good a measure as any at the moment.
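For readers unfamiliar with the metric: IBM defines Quantum Volume as 2^n, where n is the largest size at which a machine reliably runs “square” circuits of n qubits and depth n. So the jump from IBM’s first reported QV of 16 to this year’s 128 is a move from 4×4 to 7×7 circuits:

```python
import math

# QV = 2**n, where n is the largest width/depth of a reliably executed square circuit.
for qv in (16, 128):
    n = int(math.log2(qv))
    print(f"QV {qv} -> circuits of {n} qubits x depth {n}")
```

DoE is also working on benchmarks of its own, focusing a bit more on performance on given workloads.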

“[One] major component of benchmarking is asking what kind of resources does it take to run this or that interesting problem. Again, these are problems of interest to DoE, so basic science problems in chemistry and nuclear physics and things like that. What we’ll do is take applications in chemistry and nuclear physics and convert them into what we consider a benchmark. We consider it a benchmark when we can distill a metric from it. So the metric could be the accuracy, the quality of the solution, or the resources required to get a given level of quality,” said Raphael Pooser, PI for DoE’s Quantum Testbed Pathfinder project at ORNL, during an HPCwire interview.

Raphael Pooser, ORNL

Next year seems likely to bring more benchmarking activity around system quality, qubit technology, and performance on specific problem sets. Several qubit technologies still vie for sway – superconducting, trapped ion, optical, quantum dots, cold atoms, et al. The need to operate at near-zero Kelvin temperatures complicates everything. Google claimed quantum supremacy last year. This year a group of Chinese researchers did so as well. The groups used different qubit technologies (superconducting v. optical), and China’s effort tried to skirt the criticisms that were lobbed at Google’s effort. Frankly, both efforts were impressive. Russia reported early last year that it would invest $790 million in quantum, with achieving quantum supremacy as one goal.

What’s happening now is a kind of pell-mell rush among a larger and increasingly diverse quantum ecosystem (hardware, software, consultants, governments, academia). Fault tolerant quantum computing still seems distant but clever algorithms and error mitigation strategies to make productive use of NISQ systems, likely on narrow applications, look more and more promising.

Here are a few snapshots:

  • IBM’s Million Qubit Gambit. IBM is thinking big and in September outlined a roadmap, with dates and system names, for reaching 1,000 qubits by 2023. That’s a tall order. It’s also developing a 10-foot-tall, 6-foot-wide “super-fridge,” internally codenamed “Goldeneye,” intended to be able to house a million-qubit system. IBM is the game-setter so far in QC.
  • Honeywell’s Trapped Ion Bet. Hoping to leverage its control systems expertise, Honeywell has bet on trapped ion qubit technology; it brought a 10-qubit system to market this summer and announced a subscription-fee model for access. It reported the system achieved a QV of 128. Bob Sorensen, quantum watcher for Hyperion, was impressed, saying, “It is important to note that Honeywell’s emphasis here is not limited exclusively to touting qubit counts. Indeed, this QV number is based on a relatively low qubit count configuration at ten.”
  • Atos/Fujitsu Go Digital. Much of quantum computing isn’t exactly quantum. Atos introduced a dev platform, Kvasi, which consists of an accessible programming environment, optimization modules to adapt code to targeted quantum hardware constraints, and simulators that let users test their algorithms and visualize results. Fujitsu has adapted algorithms from adiabatic annealing to run on classical machines. Toshiba says its algorithm delivers a 10-fold improvement for a select class of computational problems, without the need for exotic hardware. This sharing of quantum ideas back to classical systems could be interesting and productive (see the toy annealing sketch after this list).
  • Hyperscalers Ramp Up. AWS’s Braket service went live this summer. It provides tools and access to several different vendors’ quantum systems (D-Wave, Rigetti, and IonQ). Azure announced its cloud offering roughly a year ago. Google is expected to also offer broader cloud-access to its system though that hasn’t occurred yet. Many QC vendors (IBM, Rigetti, D-Wave) also offer portal access. It is getting easier to experiment with a variety of platforms.
  • Intel – Supplier to QC Community? Intel believes its CMOS-based quantum dot technology for qubits will let it leverage its manufacturing expertise to scale up. Just as interesting is Intel’s work on cryo-controller technology. Intel recently announced Horse Ridge 2, the second generation of its cryogenic controller chip for quantum systems. Though designed for Intel’s own QC systems, it’s not such a big leap to imagine Intel supplying these devices to others. Horse Ridge attacks the nasty problem of squeezing control wiring for qubits into dilution refrigerators – only so many wires, generally too few, can fit.
  • What to Make of D-Wave. 5,000 qubits – that’s a lot of qubits. Gate-based systems are Lilliputian by comparison, currently ranging from a few qubits to 50 or so. D-Wave’s 5,000-qubit goliath, named Advantage, features 15-way qubit interconnectivity. D-Wave’s focus is on producing real-world practical results with its clients, and some big companies are actively working with it.
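To give a flavor of what “quantum-inspired on classical hardware” means, here is a toy simulated-annealing solver for a small random Ising problem – illustrative only; Fujitsu’s digital annealing algorithms and Toshiba’s simulated bifurcation machine are far more sophisticated than this sketch:

```python
# Toy quantum-inspired classical optimization: simulated annealing on a
# random Ising model (minimize E = -sum_ij J_ij * s_i * s_j, s_i in {-1,+1}).
import math
import random

random.seed(0)
N = 20
J = {(i, j): random.choice([-1.0, 1.0]) for i in range(N) for j in range(i + 1, N)}

def energy(s):
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def local_field(s, i):
    # Sum of couplings from spin i to every other spin.
    return sum(J[(min(i, j), max(i, j))] * s[j] for j in range(N) if j != i)

s = [random.choice([-1, 1]) for _ in range(N)]
steps = 20000
for step in range(steps):
    T = max(0.01, 3.0 * (1 - step / steps))  # linear cooling schedule
    i = random.randrange(N)
    dE = 2.0 * s[i] * local_field(s, i)      # energy change if spin i flips
    if dE <= 0 or random.random() < math.exp(-dE / T):
        s[i] = -s[i]                         # accept the flip
print("best-effort ground-state energy:", energy(s))
```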

The persistent question is when will all of these efforts pay off and will they be as game-changing as many believe. With new money flowing into quantum, one has the sense there will be few abrupt changes in the next couple years barring untoward economic turns.

QUANTUM COVERAGE LINKS

IBM’s Quantum Race to One Million Qubits

Google’s Quantum Chemistry Simulation Suggests Promising Path Forward

Intel Connects the (Quantum) Dots in Accelerating Quantum Computing Effort

D-Wave Delivers 5000-qubit System; Targets Quantum Advantage

Honeywell Debuts Quantum System, ‘Subscription’ Business Model, and Glimpse of Roadmap

Global QC Market Projected to Grow to More Than $800 million by 2024

ORNL’s Raphael Pooser on DoE’s Quantum Testbed Project

Rigetti Computing Wins $8.6M DARPA Grant to Demonstrate Practical Quantum Computing

Braket: Amazon’s Cloud-First Quantum Environment Is Generally Available

IBM-led Webinar Tackles Quantum Developer Community Needs

Microsoft’s Azure Quantum Platform Now Offers Toshiba’s ‘Simulated Bifurcation Machine’

  5. THIS AND THAT FROM THE HPC COMMUNITY
Brad McCredie, now at AMD

As always, there was personnel shuffling. Lately hyperscalers have been taking HPC folks. Two long-time Intel executives, Debra Goldfarb and Bill Magro, recently left for the cloud – Goldfarb to AWS as director for HPC products and strategy, and Magro to Google as CTO for HPC. Going in the other direction, John Martinis left Google’s quantum development team and recently joined Australian start-up Silicon Quantum Computing. Ginni Rometty, of course, stepped down as CEO and chairman at IBM. IBM’s long-time HPC exec Dave Turek left to take a position with DNA storage start-up Catalog, and last January, IBMer Brad McCredie joined AMD as corporate VP, GPU platforms.

OpenPOWER is getting a reboot under new executive director James Kulina: “[H]istorically the connotation has been this is an IBM pet project. [Joining the Linux Foundation] sets us up for the next chapter where we can actually go and be this fully independent entity, where we can build out an ecosystem, both hardware, silicon hardware, systems and software.” The next move sounds more broadly enterprise-focused than HPC. How do you feel about remaking NSF? Proposals were submitted in the spring; the entity would be renamed the National Science and Technology Foundation, with separate arms for science and technology. Worrisome or hopeful? Haven’t heard much since.

MLPerf.org, the follow-on effort to DAWNBench that aims to establish standard AI benchmarks, has now run both training and inference suites and also introduced an HPC-specific suite in November. The organization is growing, but so far Nvidia is the main accelerator company participating…and dominating. That could be a problem. David Kanter, executive director of MLCommons, MLPerf’s parent organization, agrees more outreach is needed to bring in other chipmakers.

Intersect360 Research

DOE’s Advanced Scientific Computing Advisory Committee has accepted a subcommittee report calling for a ten-year AI plan that loosely emulates the Exascale Computing Initiative. We’ll see where that goes. Market forecasts for next year remain uncertain. Here are good retro/prospectives (linked) from Intersect360 Research and Hyperion Research. The SC20 keynote on climate change by Bjorn Stevens was terrific – here’s a link. Thomas Sterling’s annual ISC keynote provides a good look at pandemic fighting and HPC highlights.

The IEEE Computer Society issued its 2021 predictions. Number six shouldn’t be surprising, “We will see increasing progress towards delivering medium HPC systems as a Service.” But my favorite prediction is number nine on the 2021 list – “Low Latency Virtual Music Collaboration: We predict emergence of musical appliances that will enable real-time virtual rehearsal.” I’m ready and need the live practice.

It’s been a kaleidoscopic year despite the ever-present pandemic occupying the center frame. Lots of shifting pieces including terrific science by terrific people making a difference. Happy Holidays.

 
