Blasts from the (Recent) Past and Hopes for the Future

By John Russell

December 23, 2019

What does 2020 look like to you? What did 2019 look like?

Lots happened but the main trends were carryovers from 2018 – AI messaging again blanketed everything; the roll-out of new big machines and exascale announcements continued; processor diversity and system disaggregation kicked up a notch; hyperscalers continued flexing their muscles (think AWS and its Graviton2 processor); and the U.S. and China continued their awkward trade war.

That’s hardly all. Quantum computing, though nicely funded, remained a mystery to most. Intel got (back) into the GPU (Xe) business, AMD stayed surefooted (launched its next-gen Epyc CPU on 7nm), and no one is quite sure what IBM’s next HPC move will be. TACC stood up Frontera (Dell), now the fastest academic computer in the world.

You get the idea. It was an eventful year…again.

For many participants 2019 presented a risky landscape in which the new normal (see trends above) started exacting tolls and delivering rewards. Amid the many swirling tornadoes of good and ill fortune dotting the HPC landscape there was a big winner and (perhaps) an unexpected loser. Presented here are a few highlights from the year, some thoughts about what 2019 portends, and links to a smattering of HPCwire articles as a quick 2019 rewind. Apologies for the many important trends/events omitted (surging Arm, Nvidia’s purchase of Mellanox, etc.).

  1. It was Good to Be Cray This Year

Cray had a phenomenal transformational year.

Dogged for years by questions about whether a stand-alone supercomputer company could survive in such a narrow, boom-bust market, Cray is thriving and no longer standing alone. Thank you, HPE et al. Many questions remain following HPE’s $1.3B purchase of Cray in May, but no matter how you look at it, 2019 was Cray’s year.

As they say, to the victor go the spoils:

  • Exascale Trifecta. Cray swept the exascale sweepstakes winning all three procurements (Aurora, with Intel at Argonne National Laboratory; Frontier with AMD at Oak Ridge National Laboratory; and El Capitan at Lawrence Livermore National Laboratory).
  • Shasta & Slingshot. Successful roll-out of Cray’s new system architecture first announced in late 2018. This was the big bet upon which all else, or almost all else, rests. The company declared its product portfolio refresh complete and exascale era ready in October.
  • AMD and Arm. Cray seems to be a full participant in the burgeoning of processor diversity. Case-in-point: Its new collaboration with Fujitsu to develop a commercial supercomputer powered by the Fujitsu A64FX Arm-based processor, the same chip going into the post-K “Fugaku” supercomputer. It also has significant experience using AMD processors.
  • Welcome to HPE. Fresh from gobbling SGI ($275M, ’16), HPE should be a good home for Cray, which will boost HPE’s ability to pursue high-end procurements and potentially speed the combined company’s development of next-generation technologies. HPE CEO Antonio Neri sizes the supercomputing/exascale sector at between $2.5 billion and $5 billion, and the sub-supercomputing HPC sector at $8.5 billion. Cray’s HPC storage business is another plus.

Just wow.

Cray’s heritage, of course, stretches back to 1972 when it was founded by Seymour Cray. The company has a leadership position in the top 100 supercomputer installations around the globe and is one of only a handful of companies capable of building these world-class supercomputers. Headquartered in Seattle, Cray has roughly 1,300 employees (at the time of purchase) and reported revenue of $456 million in its most recent fiscal year, up 16 percent year over year.

It seems a reasonable guess that Cray’s good fortune was more than just chance. Given the HPE investment (price was 3X revenues, 20X earnings), DoE’s exascale procurement investments, and Cray’s stature in US supercomputing amid global tensions, it’s likely many forces, mutually-aware, helped coax the deal forward. In any case, it’s good for Cray and for HPC.

Antonio Neri, HPE CEO

HPE CEO Antonio Neri has said Cray will continue as an entity and brand within HPE. Pete Ungaro, Cray’s former CEO, now becomes SVP & GM for HPC and AI at HPE. Lots of eyes will be on Neri and Ungaro as HPE moves forward. Will there be a senior leadership shakeout or can Neri get his talented senior team to work together in ways that make sense? Absorbing SGI seemed to go well, although that brand seemed to vanish once inside.

At HPCwire we have been wondering what the new HPE strategy will be and what the broader HPE technology and product roadmap will look like. Stay tuned.

CRAY HIGH POINTS

Cray – and the Cray Brand – to Be Positioned at Tip of HPE’s HPC Spear

Cray, AMD to Extend DOE’s Exascale Frontier

HPE to Acquire Cray for $1.3B

Cray Debuts ClusterStor E1000 Finishing Remake of Portfolio

 

  2. Is the Top500 Finally Topping Out?

My pick for the biggest loser of steam in 2019 may surprise. It’s the Top500, and maybe the occasional loss of steam is just part of the natural cycle. The November ’19 list was a bit of a yawn and perhaps not especially accurate. Summit (148 PF Rmax) and Sierra (96 PF Rmax) remained at the top. China’s Sunway TaihuLight (93 PF Rmax) and Tianhe-2A (61 PF Rmax) retained third and fourth. However, there were reports of systems from China that ran the Linpack benchmark and submitted results that would have put them atop the list but later withdrew them in an attempt to avoid additional blacklisting by the U.S.

It almost doesn’t matter. Not the trade war – that matters. But handicapping world computing progress and leadership based on Top500 performance seems almost passé, as does the list’s role as a showcase for startlingly new technologies. It is still a rich list with lots to learn from, but whether LINPACK remains a good metric, whether the systems entered are really comparable, and whether taking top honors is worth the effort expended are tougher questions today. It would be interesting to get IBM’s candid take on the question given its success with Summit and Sierra on the Top500 but its less successful effort to turn smaller Summit look-alikes into broad commercial (system or processor) traction. Designing and standing up these giants isn’t trivial or cheap.

The list isn’t going away. We love lists. They have value. But the investment-reward ratio and now, potentially questionable bragging rights, undermine the Top500’s value as anointer of the top dog in supercomputing. In some ways, the secondary lists (Green500 and HPCG) are more interesting and crowding the spotlight. This is hardly a new gripe (mea culpa) but the critical mass of opinion may be shifting away from the value of the Top500. There was distinctly less buzz at SC this year around the latest list.

TOP500 TOUCHPOINTS

Top500: US Maintains Performance Lead; Arm Tops Green500

Top500 Purely Petaflops; US Maintains Performance Lead

 

  3. AI in Science – the Next Exascale Initiative?

This year virtually all the major systems makers offered HPC-AI-aimed solutions – typically with one or two ‘supervisor’ CPUs and 4-8 accelerators. There are variations. Established chipmakers worked to beef up memory bandwidth, IO performance, and mixed precision capabilities. Multi-die packaging including the use of die with varying feature sizes in the same package started to take hold. Overall, these were continuations of existing trends with a more definite distinction emerging between training and inferencing platforms. One can see a menu of AI inference chips targeting specific applications coming.

On balance, the AI marketing drumbeat was even louder and more pervasive than last year.

Slide from Cerebras emphasizing its wafer-scale size ambitions

More interesting were 1) efforts by the science community to start shaping a larger strategy to fuse AI with HPC and leverage the synergy, and 2) the flurry of AI chips in various stages of readiness coming from start-ups (Graphcore, Cerebras, NovuMind, Wave Computing, Cambricon, etc.). Many of these potential disrupters will find buyers. Intel, of course, just snapped up Habana Labs for $2B.

Let’s look at the new U.S. AI Initiative signed in September and ramping efforts to define a science strategy. The Department of Energy has held a series of AI for Science town halls led by Kathy Yelick (Lawrence Berkeley National Laboratory), Jeff Nichols (Oak Ridge National Laboratory), and Rick Stevens (Argonne National Laboratory) seeking input from a broad science constituency from academia, government, and industry. In theory, this could lead to a funded AI program modeled loosely on the current Exascale Initiative.

Their formal report was due by the end of 2019 but has been pushed slightly into January when we’ll get the first glimpse of the recommendations which are likely to encompass hardware, software, and application areas.

Stevens told HPCwire, “Clearly there’s huge progress in the internet space, but those Facebooks and Googles and Microsofts and Amazons and so on, those guys are not going to be the primary drivers for AI in areas like high-energy physics or nuclear energy or wind power or new materials for solar or for cancer research – it’s not their business focus. We recognize that the challenge is how to leverage the investments made by the private sector to build on those [advances] to add what’s missing for scientific applications — and there’s lots of things missing. And then figure out what the computing community has to do to position the infrastructure and our investments in software and algorithms and math and so on to bring the AI opportunity closer to where we currently are.”

Meanwhile test beds are sprouting up in various national labs and NDAs are being signed with most of the new AI chip crowd. ANL is a good example.

Rick Stevens, Argonne National Laboratory

“We’re setting up an AI accelerator test bed [and] it’s going to be open to the whole community. The accelerator market is filled with companies, right. Our intent is to populate the test bed with as many of these as we can get working hardware from. So this one’s [Cerebras chip] a slightly different situation. It’s not nominally aimed at the test bed, but it’s actually a working system for us to do our hardest AI problems on.”

After all the ‘AI’ groundwork done by the hyperscaler and enterprise communities, it will be fascinating to watch what contributions the science community makes.

Is AI the new Exascale? We’ll see.

AI LOOK-BACK

AI for Science Town Hall Series Kicks off at Argonne

AI is the Next Exascale – Rick Stevens on What that Means and Why It’s Important

Trump Administration and NIST Issue AI Standards Development Plan

 

  4. IBM & Intel – Two Giants with Giant Challenges

That Intel and IBM are large, impressive companies with formidable reach is a given. It’s a massive mistake to underestimate their strengths or to dismiss them. But stuff happens. Markets shift. Technologies plateau. Longtime rivals and upstart competitors all clamor for a share of the pie. Both Intel and IBM are in the midst of massive pivots that encompass their HPC and other activities.

Robert Swan, Intel CEO (Credit: Intel Corporation)

First, Intel. For decades it has been king of the microprocessor market – leveraging design and manufacturing technology leadership to claim a mid-to-high 90s percent market share in CPUs along with an extensive portfolio of other semiconductor products. The decline in Moore’s Law, the rise of heterogeneous computing architectures, product and process missteps (e.g. KNL and OmniPath, both discontinued, Lustre stewardship now ended), along with reinvigorated rivals (principally AMD and recently Arm) have sent shock waves through the company.

You may recall Bob Swan was elevated from interim to permanent CEO last January. Speaking at the Credit Suisse technology conference this December, Swan said he wants change. This quote is from Wccftech:

“We think about having 30 percent share in a $230B [silicon] TAM that we think is going to grow to $300B [silicon] TAM over the next four years. And frankly, I’m trying to destroy the thinking about having 90 percent (CPU market) share inside our company because I think it limits our thinking, I think we miss technology transitions. We miss opportunities because we’re in some ways pre-occupied with protecting 90 instead of seeing a much bigger market with much more innovation going on, both inside our four walls and outside our four walls.

“So we come to work in the morning with a 30 percent share, with every expectation over the next several years that we will play a larger and larger role in our customers’ success, and that doesn’t just (mean) CPUs. It means GPUs, it means AI, it does mean FPGAs, it means bringing these technologies together so we’re solving customers’ problems. So we’re looking at a company with roughly 30 percent share in a $288B silicon TAM, not CPU TAM but silicon TAM. We look at the investments we’ve been making over the last several years in these kind of key technology inflections: 5G, AI, autonomous, acquisitions, including Altera, that we think is more and more relevant both in the cloud but also in the network and at the edge.”

There have been key executive changes. Rajeeb Hazra, corporate VP of Intel’s Data Center Group and GM for the Enterprise and Government Group, is retiring, and his replacement has not yet been named. Gary Patton is leaving GlobalFoundries, where he was CTO and R&D SVP, to become corporate Intel VP and GM of design enablement, reporting to CTO Michael Mayberry.

Don’t count Intel out. It has enormous technical and financial capital. At SC19, Intel debuted its new Xe GPU line with Ponte Vecchio as the top SKU aimed at HPC. There are plans for many variants aimed at different AI applications, with the first parts expected to reach market this summer in a consumer application. Ponte Vecchio will be in Aurora. Intel’s Optane persistent memory product line is showing early traction. Of course, the Xeon CPU family still dominates the landscape – despite process hiccups; on the minus side, AMD is now aiming for double-digit market share and Arm is mounting a surge into the datacenter.

So the news on the product front is mixed.

That said, Intel is getting good marks for playing nicely with collaborators in its efforts on OpenHPC, the plug-and-play stack it championed that is now reasonably well established, and the more nascent Compute Express Link (CXL) CPU-to-device interconnect consortium. How oneAPI fares will be interesting to watch. Intel has a lot riding on it for its GPU line, and presumably oneAPI will be how one ports apps to it.

Intel has its fingers in so many pies that foreseeing the company’s trajectory is no easy task.

INTEL 2019 HITS

Intel Debuts New GPU – Ponte Vecchio – and Outlines oneAPI

Intel Launches Cascade Lake Xeons with Up to 56 Cores

Intel AI Summit: New ‘Keem Bay’ Edge VPU, AI Product

At Hot Chips, Intel Shows New Nervana AI Training, Inference Chips

 

IBM’s challenges are somewhat different. Its gigantic $34B purchase of Red Hat, which closed in July, seems to be working as IBM seeks to embrace all things cloud and many things open source and Linux. The new question, really, is how does HPC fit into IBM’s evolving worldview and strategic plan.

Big Blue made massive bets here in the past. The latest chapter roughly starts with its decision to get out of the x86 business by selling it to Lenovo (servers in ’13, PCs in ’05). It gambled on the IBM Power microprocessor line and leading the OpenPOWER Foundation (with Nvidia, Mellanox, Tyan, and Google). The intent was to create an alternative to the x86 ecosystem.

Where Intel was playing a closed-architecture, one-size-fits-all game, argued IBM, it would take a more collaborative and open approach. No doubt Intel would dispute that characterization. At SC15, long-time IBM executive Ken King argued the Power/OpenPOWER upside was huge, citing an ambitious 20-30 percent market share target.

IBM Power9 AC922 rendering

By SC16 many of the pieces were in place – OpenPOWER had 250-plus members, Power8+ with NVLink technology was out, work on OpenCAPI had started, IBM had launched the Minsky server with Power8, and Google/Rackspace announced plans for a Power9-based server supporting OCP. A confident King told HPCwire at SC16, “This year (2017) is about scale. We’re [IBM/OpenPOWER] only going to be effective if we get to 10-15-20 percent of the Linux socket market. Being at one or two percent won’t [do it].”

Skip a year and fast forward to ISC 2018 when Summit, the IBM-built supercomputer for the CORAL program, regained the top spot on the Top500 for the U.S. It was a stunning achievement. Summit has held the Top500 crown on the last four editions of the list and it is churning out impactful science.

Problem is, the market traction needed never adequately materialized. We won’t go into all the reasons, but cost, the effort to port apps (at least early on), and competitor pricing were all factors. Also, the rise of accelerators (GPUs mostly) made CPUs look a bit mundane – not a good thing given the development costs needed to keep advancing the Power CPU line. Likewise AI, a persistent rumor but hardly a thunderous echo when IBM made its Power/OpenPOWER bet, burst onto the scene with unexpected force. All this while IBM’s cloud effort lagged its hopes and garnered attention in the C-suite.

This year at SC19 IBM introduced no new Power-based systems and provided no update on Power10 chip plans. The OpenPOWER Foundation has been moved to the Linux Foundation. IBM didn’t win any of the exascale awards; this shocked many observers (given Summit’s success). Lastly, longtime IBM executive Dave Turek, now vice president of high performance and cognitive computing, described a startlingly new IBM HPC-AI strategy that sounded unlike typical IBM practice. It emphasizes selling small systems – as small as a single node, said Turek – to a much larger customer base. It’s based on using IBM AI software to analyze host infrastructure and application performance and to speed those efforts with minimum intrusion on the customer’s existing infrastructure.

This is from a Turek interview with HPCwire:

Dave Turek, IBM

“[What] we’re trying to do strategically is get away from this rip-and-replace phenomenon that’s characterized HPC since the beginning of time…So we take a solution. It’s a small Power cluster. It’s got the Bayesian software on it. You have an existing, I don’t know, a Haswell cluster, a few years old, running simulations and because your last dose of capital from your enterprise was five years ago, you can’t get another bunch of money. What do you do?

“What we would do is bring in our cluster, put it in the datacenter, and bring up a database and give access to that database from both sides. So [a] simulation runs, it puts output parameters into the database. I know something’s arrived, I go and inspect that, my machine learning algorithms analyze that, and it makes recommendations of the input parameters. So next simulation, rinse and repeat. And as you progress through this, the Bayesian machine learning solution gets smarter and smarter and smarter. And you get to get to your end game quicker and quicker and quicker.”

The results, he says, can be game-changing: “[As a customer] I put it in a four-node cluster adjacent to my 2,000-node cluster, and I make my 2,000-node cluster behave as if it was an 8,000-node cluster? How long do I have to think about this?” You get the picture.
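The loop Turek describes – simulation posts results to a shared database, machine learning inspects them and recommends the next input parameters, rinse and repeat – can be sketched as a simple optimize-the-inputs cycle. This is a hedged illustration, not IBM's actual Bayesian software: `simulation` is a hypothetical stand-in for the legacy cluster's job, and `suggest` uses naive sample-near-the-best logic where a real system would fit a Bayesian surrogate model.

```python
import random

random.seed(7)  # deterministic for the sketch

def simulation(x):
    """Hypothetical stand-in for the expensive job on the legacy cluster."""
    return (x - 3.2) ** 2  # pretend lower output is better

def suggest(history, bounds, shrink=0.5):
    """Recommend the next input: sample near the best result seen so far.

    A real system would fit a Bayesian surrogate model here; the control
    flow (read database, recommend, repeat) is the point of the sketch."""
    if not history:
        return random.uniform(*bounds)
    best_x, _ = min(history, key=lambda h: h[1])
    span = (bounds[1] - bounds[0]) * shrink ** len(history)
    lo = max(bounds[0], best_x - span)
    hi = min(bounds[1], best_x + span)
    return random.uniform(lo, hi)

history = []  # the shared database both sides read and write
for _ in range(12):
    x = suggest(history, (0.0, 10.0))  # ML side recommends input parameters
    y = simulation(x)                  # simulation side runs and posts output
    history.append((x, y))             # next simulation, rinse and repeat

best_x, best_y = min(history, key=lambda h: h[1])
```

The recommender only ever sees the database, never the cluster itself, which is what lets a small sidecar system improve an existing cluster's effective throughput without the rip-and-replace step.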

Many wonder if more IBM changes aren’t ahead and what role HPC will have in Big Blue’s future.

IBM’S CHANGING PERSPECTIVE

SC19: IBM Changes Its HPC-AI Game Plan

Crystal Ball Gazing: IBM’s Vision for the Future of Computing

IBM Deepens Plunge into Open Source; OpenPOWER to Join Linux Foundation

 

  5. Quantum – The Haze is a Little Clearer but Solutions Not Nearer

What would technology be without a few kerfuffles? QC had a dandy around Google’s claim of achieving quantum supremacy, to which IBM took vigorous exception and which spawned a tart response on the anonymous Twitter feed Quantum Bullshit Detector. Launched last spring, QBD is generally on the lookout for what it detects as folly in QC. No doubt there’s lots to detect in QC. There’s an interesting overview of QBD written by Sophia Chen in Wired.

Google’s Sycamore quantum chip

Here’s a quick reprise of Google’s quantum supremacy claim, published in Nature: Google reported it was able to perform a task (a particular random number generation) using its 53-qubit processor, Sycamore, in 200 seconds, versus what would take on the order of 10,000 years on a supercomputer. The authors “estimated the classical computational cost” of running the supremacy circuits with simulations on Summit and on a large Google cluster. IBM demurred and argued it discovered a way to do the same task in two days (still hardly 200 seconds) on Summit.

In a sense, who cares. Many argue quantum supremacy is a silly idea to start with. Maybe. Practically, the Google engineering work was important, even if it didn’t constitute achieving quantum supremacy. Let’s move on.

Last year the haze around quantum computing thickened rather than thinned from the previous year. Many feel the smoky scene is even worse now. But maybe not. Last year we were expecting clearer answers. This year we know better. Here’s why:

  • Are we there yet? No. Actually, definitively no. There’s broad public agreement from virtually all the key players (IBM, Rigetti, Google, Microsoft, D-Wave) that practical applications lie years, many years, ahead. Leave aside the inevitable hype generated by government attention & funding (e.g. the $1.25B National Quantum Initiative Act signed a year ago). Jim Clarke, who leads Intel’s effort, says eight years is a good bet for reaching Quantum Advantage – the point at which QC is able to do something sufficiently better than classical computers to warrant switching. He may be optimistic.
  • The problem – We love the mystery but don’t understand it. Quantum computing is inherently mysterious and therefore fascinating. But most of the public announcements, including growing qubit counts, don’t really mean much to most of us and, as importantly, don’t mean much in terms of catapulting QC forward. Even as very coarse progress milestones, the litany of papers and new larger systems and collaborations doesn’t tell us much yet.
A rendering of IBM Q System One

Just this week, IBM announced a three-year agreement with Japan, led by the University of Tokyo, to foster QC development. This is the third such IBM international agreement. Are they important? Yes. Will there be practical quantum computing at the end of this latest three-year IBM-Japan effort? No. In September, D-Wave revealed the name of its forthcoming 5,000-qubit quantum annealing system, Advantage, picked to emulate the idea of Quantum Advantage. Will it deliver?

I love this comment from John Martinis, who heads Google’s quantum effort, on the (semiconductor-based superconducting) technology’s challenge: “Breaking RSA is going to take, let’s say, 100 million physical qubits. And you know, right now we’re at what is it? 53. So, that’s going to take a few years.” Indeed, Google’s 54-qubit Sycamore chip actually functioned as a 53-qubit device during the supremacy exercise because one of the control wires broke.

What’s clearer today than last year, and more publicly agreed to by most of the QC community, is that there won’t be some sudden breakthrough that makes quantum computing a practical tool soon. There’s lots of interesting, important work happening. Hyperscalers are getting more involved, a la the AWS three-prong effort (portal for third-party QC tech & services; hardware research collaboration; consulting effort to ID potential apps). Intel’s new cryo-controller chip is also interesting. Maybe Intel will become a component supplier to the QC world. S/W tools are edging forward. But…

The point is QC is far from ready – we should watch it, not with a jaded eye but with a patient eye that screens out hype. There’s great QC technology being developed by very many organizations along what will be a long journey.

QUANTUM NOTABLES

Google Goes Public with Quantum Supremacy Achievement; IBM Disagrees

IBM Opens Quantum Computing Center; Announces 53-Qubit

IBM Pitches Quantum Volume as Benchmarking Tool for Gate-based Quantum Computers

Intel’s Jim Clarke on its New Cryo-controller and why Intel isn’t Late to the Quantum Party

D-Wave’s Path to 5000 Qubits; Google’s Quantum Supremacy Claim

 

  6. Bits and Pieces from around the HPC Community

MLPerf, the AI benchmarking effort gaining traction, issued an inferencing suite to go with its training exercises; the first inferencing results came out in November, with Nvidia claiming a good showing. Check them out. Has anyone heard more on Deep500? After holding a well-attended session at SC18 to discuss formative ideas, there was no sign of it at SC19. Deep500’s intent is sort of evident from the title: create an AI benchmarking tool and competition spanning small to very big systems.

Here’s a well-earned kudos. Robert Dennard won SIA’s Robert Noyce Award last year. He is, of course, the father of Dennard scaling, which sadly has run its course, and, perhaps more importantly, of DRAM. Here’s a link to a nice tribute. Dennard has had great impact on our industry.

As always, there were significant personnel changes – the departure of Rajeeb Hazra, longtime HPC exec at Intel, is one. It will be interesting to see what he does next. Barry Bolding, a former CSO at Cray, joined AWS as director, global HPC. Someone HPCwire has leaned on for insight about life sciences in HPC, Ari Berman, was promoted to CEO of BioTeam consulting – his team advised on the latest design of Biowulf, NIH’s now 20-year-old, constantly evolving HPC system. Gina Tourassi was appointed director of the National Center for Computational Sciences at ORNL. Congrats! Trump announced plans to nominate Sethuraman “Panch” Panchanathan to serve as the 15th director of the NSF.

A quick look at market numbers. HPC server sales in the first half of 2019 totaled $6.7 billion, while 2018 sales grew 15 percent overall, according to Hyperion. That’s a strong picture of health. Maybe more impressive (or scary) is that 11 hyperscalers spent more than $1 billion apiece on IT infrastructure in 2018; three spent more than $5 billion and one, Google, broke the $10 billion spend barrier, according to Intersect360 Research. The concentration of buying power in the cloud community is astonishing.

HPE’s Spaceborne Computer returned home after 615 days on the International Space Station. It was a 1-teraflops system built with off-the-shelf parts to see if they could withstand the radiation. It did, in the sense that although error rates were higher than normal on the ground, they were manageable and the system was able to do real work. Nvidia didn’t launch any new monster-size GPUs, but it gobbled up interconnect (InfiniBand and Ethernet) specialist Mellanox for $6.9 billion (the deal hasn’t closed yet). Nvidia research chief William Dally talked about the company’s R&D strategy at GTC19 – perhaps not surprisingly, productizing is the central tenet.

Here’s a good closing note. The venerable Titan supercomputer was retired on August 1st. Housed at OLCF, Titan ranked among the world’s top 10 fastest supercomputers from its debut at No. 1 in 2012 until June 2019. During that time, Titan delivered more than 26 billion core hours of computing time. When launched, it represented a new approach that combined 18,688 16-core AMD Opteron CPUs with 18,688 Nvidia Kepler K20 GPUs. OLCF Program Director Buddy Bland recalls, “Choosing a GPU-accelerated system was considered a risky choice.” Job well done.

Happy holidays to all.

Subscribe to HPCwire's Weekly Update!

Be the most informed person in the room! Stay ahead of the tech trends with industy updates delivered to you every week!

UT Dallas Grows HPC Storage Footprint for Animation and Game Development

October 28, 2020

Computer-generated animation and video game development are extraordinarily computationally intensive fields, with studios often requiring large server farms with hundreds of terabytes – or even petabytes – of storag Read more…

By Staff report

Frame by Frame, Supercomputing Reveals the Forms of the Coronavirus

October 27, 2020

From the start of the pandemic, supercomputing research has been targeting one particular protein of the coronavirus: the notorious “S” or “spike” protein, which allows the virus to pry its way into human cells a Read more…

By Oliver Peckham

AMD Reports Record Revenue and $35B Deal to Buy Xilinx

October 27, 2020

AMD this morning reported record quarterly revenue of $2.8 billion and a finalized deal to buy FPGA-maker Xilinx for $35 billion in an all-stock transaction. The acquisition helps AMD keep pace during a time of consolida Read more…

By John Russell

Nvidia-Arm Deal a Boon for RISC-V?

October 26, 2020

The $40 billion blockbuster acquisition deal that will bring chip maker Arm into the Nvidia corporate family could provide a boost for the competing RISC-V architecture. As regulators in the U.S., China and the Europe Read more…

By George Leopold

OpenHPC Progress Report – v2.0, More Recipes, Cloud and Arm Support, Says Schulz

October 26, 2020

Launched in late 2015 and transitioned to a Linux Foundation Project in 2016, OpenHPC has marched quietly but steadily forward. Its goal “to provide a reference collection of open-source HPC software components and bes Read more…

By John Russell

AWS Solution Channel

Rapid Chip Design in the Cloud

Time-to-market and engineering efficiency are the most critical and expensive metrics for a chip design company. With this in mind, the team at Annapurna Labs selected Altair AcceleratorRead more…

Intel® HPC + AI Pavilion




