Blasts from the (Recent) Past and Hopes for the Future

By John Russell

December 23, 2019

What does 2020 look like to you? What did 2019 look like?

Lots happened but the main trends were carryovers from 2018 – AI messaging again blanketed everything; the roll-out of new big machines and exascale announcements continued; processor diversity and system disaggregation kicked up a notch; hyperscalers continued flexing their muscles (think AWS and its Graviton2 processor); and the U.S. and China continued their awkward trade war.

That’s hardly all. Quantum computing, though nicely-funded, remained a mystery to most. Intel got (back) into the GPU (Xe) business, AMD stayed surefooted (launched next-gen Epyc CPU on 7nm), and no one is quite sure what IBM’s next HPC move will be. TACC stood up Frontera (Dell), now the fastest academic computer in the world.

You get the idea. It was an eventful year…again.

For many participants 2019 presented a risky landscape in which the new normal (see trends above) started exacting tolls and delivering rewards. Amid the many swirling tornadoes of good and ill fortune dotting the HPC landscape, there was a big winner and (perhaps) an unexpected loser. Presented here are a few highlights from the year, some thoughts about what 2019 portends, and links to a smattering of HPCwire articles as a quick 2019 rewind. Apologies for the many important trends/events omitted (surging Arm, Nvidia’s purchase of Mellanox, etc.).

  1. It was Good to Be Cray This Year

Cray had a phenomenal transformational year.

Dogged for years by questions about whether a stand-alone supercomputer company could survive in such a narrow, boom-bust market, Cray is thriving and no longer standing alone. Thank you HPE et al. Many questions remain following HPE’s $1.3B purchase of Cray in May, but no matter how you look at it, 2019 was Cray’s year.

As they say, to the victor go the spoils:

  • Exascale Trifecta. Cray swept the exascale sweepstakes winning all three procurements (Aurora, with Intel at Argonne National Laboratory; Frontier with AMD at Oak Ridge National Laboratory; and El Capitan at Lawrence Livermore National Laboratory).
  • Shasta & Slingshot. Successful roll-out of Cray’s new system architecture first announced in late 2018. This was the big bet upon which all else, or almost all else, rests. The company declared its product portfolio refresh complete and exascale era ready in October.
  • AMD and Arm. Cray seems to be a full participant in burgeoning processor diversity. Case in point: its new collaboration with Fujitsu to develop a commercial supercomputer powered by the Fujitsu A64FX Arm-based processor, the same chip going into the post-K “Fugaku” supercomputer. It also has significant experience using AMD processors.
  • Welcome to HPE. Fresh from gobbling SGI ($275M, ’16), HPE should be a good home for Cray, which will boost HPE’s ability to pursue high-end procurements and potentially speed the combined company’s development of next-generation technologies. HPE CEO Antonio Neri sizes the supercomputing/exascale sector at between $2.5 billion and $5 billion, and the sub-supercomputing HPC sector at $8.5 billion. Cray’s HPC storage business is another plus.

Just wow.

Cray’s heritage, of course, stretches back to 1972 when it was founded by Seymour Cray. The company has a leadership position in the top 100 supercomputer installations around the globe and is one of only a handful of companies capable of building these world-class supercomputers. Headquartered in Seattle, Cray has roughly 1,300 employees (at time of purchase) and reported revenue of $456 million in its most recent fiscal year, up 16 percent year over year.

It seems a reasonable guess that Cray’s good fortune was more than just chance. Given the HPE investment (price was 3X revenues, 20X earnings), DoE’s exascale procurement investments, and Cray’s stature in US supercomputing amid global tensions, it’s likely many forces, mutually-aware, helped coax the deal forward. In any case, it’s good for Cray and for HPC.

Antonio Neri, HPE CEO

HPE CEO Antonio Neri has said Cray will continue as an entity and brand with HPE. Pete Ungaro, Cray’s former CEO, now becomes SVP & GM for HPC and AI at HPE. Lots of eyes will be on Neri and Ungaro as HPE moves forward. Will there be a senior leadership shakeout, or can Neri get his talented senior team to work together in ways that make sense? Absorbing SGI seemed to go well, although the brand seemed to vanish once inside.

At HPCwire we have been wondering what the new HPE strategy will be, what the broader HPE technology and product roadmap will look like, and so on. Stay tuned.

CRAY HIGH POINTS

Cray – and the Cray Brand – to Be Positioned at Tip of HPE’s HPC Spear

Cray, AMD to Extend DOE’s Exascale Frontier

HPE to Acquire Cray for $1.3B

Cray Debuts ClusterStor E1000 Finishing Remake of Portfolio

 

  2. Is the Top500 Finally Topping Out?

My pick for the biggest loser of steam in 2019 may surprise. It’s the Top500, and maybe the occasional loss of steam is just part of the natural cycle. The November ’19 list was a bit of a yawn and perhaps not especially accurate. Summit (148 PF Rmax) and Sierra (96 PF Rmax) remained at the top. China’s Sunway TaihuLight (93 PF Rmax) and Tianhe-2A (61 PF Rmax) retained third and fourth. However, there were reports of systems from China that ran the Linpack benchmark and submitted results that would have put them atop the list, only to withdraw them in an attempt to avoid additional blacklisting by the US.

It almost doesn’t matter. Not the trade war – that matters. But handicapping worldwide computing progress and leadership based on Top500 performance seems almost passé, as does using the list as a showcase for startlingly new technologies. It is still a rich list with lots to learn from, but whether LINPACK remains a good metric, whether the systems entered are really comparable, and whether taking top honors is worth the effort expended are tougher questions today. It would be interesting to get IBM’s candid take on the question, given its success with Summit and Sierra on the Top500 but its less successful effort to turn smaller Summit look-alikes into broad commercial (system or processor) traction. Designing and standing up these giants isn’t trivial or cheap.

The list isn’t going away. We love lists. They have value. But the investment-reward ratio and, now, potentially questionable bragging rights undermine the Top500’s value as anointer of the top dog in supercomputing. In some ways, the secondary lists (Green500 and HPCG) are more interesting and are crowding the spotlight. This is hardly a new gripe (mea culpa), but the critical mass of opinion may be shifting away from the value of the Top500. There was distinctly less buzz at SC this year around the latest list.

TOP500 TOUCHPOINTS

Top500: US Maintains Performance Lead; Arm Tops Green500

Top500 Purely Petaflops; US Maintains Performance Lead

 

  3. AI in Science – the Next Exascale Initiative?

This year virtually all the major systems makers offered HPC-AI-aimed solutions – typically with one or two ‘supervisor’ CPUs and 4-8 accelerators. There are variations. Established chipmakers worked to beef up memory bandwidth, IO performance, and mixed-precision capabilities. Multi-die packaging, including the use of dies with varying feature sizes in the same package, started to take hold. Overall, these were continuations of existing trends, with a more definite distinction emerging between training and inferencing platforms. One can see a menu of AI inference chips targeting specific applications coming.

On balance, the AI marketing drumbeat was even louder and more pervasive than last year.

Slide from Cerebras emphasizing its wafer-scale size ambitions

More interesting were 1) efforts by the science community to start shaping a larger strategy to fuse AI with HPC and leverage the synergy, and 2) the flurry of AI chips in various stages of readiness coming from start-ups (Graphcore, Cerebras, NovuMind, Wave Computing, Cambricon, etc.). Many of these potential disrupters will find buyers. Intel, of course, just snapped up Habana Labs for $2B.

Let’s look at the new U.S. AI Initiative signed in September and the ramping efforts to define a science strategy. The Department of Energy has held a series of AI for Science town halls led by Kathy Yelick (Lawrence Berkeley National Laboratory), Jeff Nichols (Oak Ridge National Laboratory), and Rick Stevens (Argonne National Laboratory), seeking input from a broad science constituency spanning academia, government, and industry. In theory, this could lead to a funded AI program modeled loosely on the current Exascale Initiative.

Their formal report was due by the end of 2019 but has been pushed slightly into January, when we’ll get a first glimpse of the recommendations, which are likely to encompass hardware, software, and application areas.

Stevens told HPCwire, “Clearly there’s huge progress in the internet space, but those Facebooks and Googles and Microsofts and Amazons and so on, those guys are not going to be the primary drivers for AI in areas like high-energy physics or nuclear energy or wind power or new materials for solar or for cancer research – it’s not their business focus. We recognize that the challenge is how to leverage the investments made by the private sector to build on those [advances] to add what’s missing for scientific applications — and there’s lots of things missing. And then figure out what the computing community has to do to position the infrastructure and our investments in software and algorithms and math and so on to bring the AI opportunity closer to where we currently are.”

Meanwhile test beds are sprouting up in various national labs and NDAs are being signed with most of the new AI chip crowd. ANL is a good example.

Rick Stevens, Argonne National Laboratory

“We’re setting up an AI accelerator test bed [and] it’s going to be open to the whole community. The accelerator market is filled with companies, right. Our intent is to populate the test bed with many of these as we can get working hardware from. So this one’s [Cerebras chip] a slightly different situation. It’s not nominally aimed at the test bed, but it’s actually a working system for us to do our hardest AI problems on.”

After all the ‘AI’ groundwork done by hyperscalers and the enterprise community, it will be fascinating to watch what contributions the science community makes.

Is AI the new Exascale? We’ll see.

AI LOOK-BACK

AI for Science Town Hall Series Kicks off at Argonne

AI is the Next Exascale – Rick Stevens on What that Means and Why It’s Important

Trump Administration and NIST Issue AI Standards Development Plan

 

  4. IBM & Intel – Two Giants with Giant Challenges

That Intel and IBM are large, impressive companies with formidable reach is a given. It’s a massive mistake to underestimate their strengths or to dismiss them. But stuff happens. Markets shift. Technologies plateau. Longtime rivals and upstart competitors all clamor for a share of the pie. Both Intel and IBM are in the midst of massive pivots that encompass their HPC and other activities.

Robert Swan, Intel CEO (Credit: Intel Corporation)

First, Intel. For decades it has been king of the microprocessor market – leveraging design and manufacturing technology leadership to claim a mid-to-high 90s percent market share in CPUs along with an extensive portfolio of other semiconductor products. The decline of Moore’s Law, the rise of heterogeneous computing architectures, product and process missteps (e.g. KNL and OmniPath, both discontinued, and Lustre stewardship now ended), along with reinvigorated rivals (principally AMD and recently Arm), have sent shock waves through the company.

You may recall Bob Swan was elevated from interim to permanent CEO last January. Speaking at the Credit Suisse technology conference this December, Swan said he wants change. This quote is from Wccftech:

“We think about having 30 percent share in a $230 [silicon] TAM that we think is going to grow to $300B [silicon] TAM over the next four years. And frankly, I’m trying to destroy the thinking about having 90 percent (CPU market) share inside our company because I think it limits our thinking, I think we miss technology transitions. We miss opportunities because we’re in some ways pre-occupied with protecting 90 instead of seeing a much bigger market with much more innovation going on, both inside our four walls and outside our four walls.

“So we come to work in the morning with a 30 percent share, with every expectation over the next several years that we will play a larger and larger role in our customers’ success, and that doesn’t just (mean) CPUs. It means GPUs, it means AI, it does mean FPGAs, it means bringing these technologies together so we’re solving customers’ problems. So we’re looking at a company with roughly 30 percent share in a $288 silicon TAM, not CPU TAM but silicon TAM. We look at the investments we’ve been making over the last several years in these kind of key technology inflections: 5G, AI, autonomous, acquisitions, including Altera, that we think is more and more relevant both in the cloud but also in the network and at the edge.”

There have been key executive changes. Rajeeb Hazra, corporate VP of Intel’s Data Center Group and GM for the Enterprise and Government Group, is retiring and his replacement has not yet been named. Gary Patton is leaving GlobalFoundries, where he was CTO and R&D SVP, to become corporate Intel VP and GM of design enablement, reporting to CTO Michael Mayberry.

Don’t count Intel out. It has enormous technical and financial capital. At SC19, Intel debuted its new Xe GPU line with Ponte Vecchio as the top SKU aimed at HPC. There are plans for many variants aimed at different AI applications, with the first parts expected to reach market this summer in a consumer application. Ponte Vecchio will be in Aurora. Intel’s Optane persistent memory product line is showing early traction. Of course, the Xeon CPU family still dominates the landscape – despite process hiccups; on the minus side, AMD is now aiming for double-digit market share and Arm is mounting a surge into the datacenter.

So the news on the product front is mixed.

That said, Intel is getting good marks for playing nicely with collaborators on OpenHPC, the plug-and-play stack it championed that is now reasonably well established, and on the more nascent Compute Express Link (CXL) CPU-to-device interconnect consortium. How oneAPI fares will be interesting to watch. Intel has a lot riding on it for its GPU line, and presumably oneAPI will be how one ports apps to it.

Intel has its fingers in so many pies that foreseeing the company’s trajectory is no easy task.

INTEL 2019 HITS

Intel Debuts New GPU – Ponte Vecchio – and Outlines oneAPI

Intel Launches Cascade Lake Xeons with Up to 56 Cores

Intel AI Summit: New ‘Keem Bay’ Edge VPU, AI Product

At Hot Chips, Intel Shows New Nervana AI Training, Inference Chips

 

IBM’s challenges are somewhat different. Its gigantic $34B purchase of Red Hat, which closed in July, seems to be working as IBM seeks to embrace all things cloud and many things open source and Linux. The new question, really, is how HPC fits into IBM’s evolving worldview and strategic plan.

Big Blue made massive bets here in the past. The latest chapter roughly starts with its decision to get out of the x86 business by selling it to Lenovo (servers in ’14, PCs in ’05). It gambled on the IBM Power microprocessor line and leading the OpenPOWER Foundation (with Nvidia, Mellanox, Tyan, and Google). The intent was to create an alternative to the x86 ecosystem.

Where Intel was playing a closed-architecture, one-size-fits-all game, argued IBM, it would take a more collaborative and open approach. No doubt Intel would dispute that characterization. At SC15, long-time IBM executive Ken King argued the Power/OpenPOWER upside was huge, citing an ambitious 20-30 percent market share target.

IBM Power9 AC922 rendering

By SC16 many of the pieces were in place – OpenPOWER had 250-plus members, Power8+ with NVLink technology was out, work on OpenCAPI had started, IBM had launched the Minsky server with Power8, and Google/Rackspace announced plans for a Power9-based server supporting OCP. A confident King told HPCwire at SC16, “This year (2017) is about scale. We’re [IBM/OpenPOWER] only going to be effective if we get to 10-15-20 percent of the Linux socket market. Being at one or two percent won’t [do it].”

Skip a year and fast forward to ISC 2018 when Summit, the IBM-built supercomputer for the CORAL program, regained the top spot on the Top500 for the U.S. It was a stunning achievement. Summit has held the Top500 crown on the last four editions of the list and it is churning out impactful science.

Problem is, the market traction needed never adequately materialized. We won’t go into all the reasons, but cost, the effort to port apps (at least early on), and competitor pricing were all factors. Also, the rise of accelerators (GPUs mostly) made CPUs look a bit mundane – not a good thing given the development costs needed to keep advancing the Power CPU line. Likewise AI, a persistent rumor but hardly a thunderous echo when IBM made its Power/OpenPOWER bet, burst onto the scene with unexpected force. All this while IBM’s cloud effort lagged its hopes and garnered attention in the C-suite.

This year at SC19 IBM introduced no new Power-based systems and provided no update on Power10 chip plans. The OpenPOWER Foundation has been moved to the Linux Foundation. IBM didn’t win any of the exascale awards; this shocked many observers (re: Summit’s success). Lastly, longtime IBM executive Dave Turek, now vice president of high performance and cognitive computing, described a startlingly new IBM HPC-AI strategy that sounded unlike typical IBM practice. It emphasizes selling small systems – as small as a single node, said Turek – to a much larger customer base. It’s based on using IBM AI software to analyze host infrastructure and application performance and to speed those efforts with minimum intrusion on the customer’s existing infrastructure.

This is from a Turek interview with HPCwire:

Dave Turek, IBM

“[What] we’re trying to do strategically is get away from this rip and replace phenomenon that’s characterized HPC since the beginning of time…So we take a solution. It’s a small Power cluster. It’s got the Bayesian software on it. You have an existing, I don’t know, a Haswell cluster, a few years old, running simulations and because your last dose of capital from your enterprise was five years ago, you can’t get another bunch of money. What do you do?

“What we would do is bring in our cluster, put it in the datacenter, and bring up a database and give access to that database from both sides. So [a] simulation runs, it puts output parameters into the database. I know something’s arrived, I go and inspect that, my machine learning algorithms analyze that, and it makes recommendations of the input parameters. So next simulation, rinse and repeat. And as you progress through this, the Bayesian machine learning solution gets smarter and smarter and smarter. And you get to get to your end game quicker and quicker and quicker.”

The results, he says, can be game-changing: “[As a customer] I put it in a four-node cluster adjacent to my 2,000-node cluster, and I make my 2,000-node cluster behave as if it was an 8,000-node cluster? How long do I have to think about this?” You get the picture.
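
What Turek describes is essentially Bayesian optimization wrapped around an existing simulation workflow: the simulation writes its outputs to a shared database, a machine-learning model studies them and recommends the next set of inputs, and the loop repeats. Below is a minimal sketch of that ask/tell pattern in Python, using the open-source scikit-optimize package as a stand-in for IBM’s proprietary Bayesian software; the run_simulation function, parameter ranges, and toy objective are hypothetical placeholders for the legacy cluster’s jobs and the database exchange.

    # A minimal sketch (not IBM's actual software) of the Bayesian ask/tell loop
    # Turek describes, using the open-source scikit-optimize package. The
    # simulation call, parameter ranges, and objective are hypothetical placeholders.
    from skopt import Optimizer

    def run_simulation(params):
        # Stand-in for the real workflow: write inputs to the shared database,
        # run the simulation on the existing (e.g., Haswell) cluster, then read
        # the resulting figure of merit back from the database.
        x, y = params
        return (x - 0.3) ** 2 + (y - 5.0) ** 2  # toy objective: lower is better

    # Hypothetical ranges for the two input parameters the simulation accepts.
    opt = Optimizer(dimensions=[(0.0, 1.0), (1.0, 10.0)])

    results = []
    for step in range(20):
        candidate = opt.ask()              # ML side recommends input parameters
        score = run_simulation(candidate)  # simulation side produces output metrics
        opt.tell(candidate, score)         # the model gets "smarter and smarter"
        results.append((score, candidate))

    best_score, best_params = min(results, key=lambda r: r[0])
    print("best parameters found:", best_params, "with score", best_score)

In the setup Turek sketches, the ask and tell steps would flow through the shared database rather than a single script, which is what lets the small Power cluster sit beside the legacy system with no rip and replace.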

Many wonder whether more IBM changes are ahead and what role HPC will have in Big Blue’s future.

IBM’S CHANGING PERSPECTIVE

SC19: IBM Changes Its HPC-AI Game Plan

Crystal Ball Gazing: IBM’s Vision for the Future of Computing

IBM Deepens Plunge into Open Source; OpenPOWER to Join Linux Foundation

 

  5. Quantum – The Haze is a Little Clearer but Solutions Not Nearer

What would technology be without a few kerfuffles? QC had a dandy around Google’s claim of achieving quantum supremacy, to which IBM took vigorous exception and which spawned a tart response on the anonymous Twitter feed Quantum Bullshit Detector. Launched last spring, QBD is generally on the lookout for what it detects as folly in QC. No doubt there’s lots to detect in QC. There’s an interesting overview of QBD written by Sophia Chen in Wired.

Google’s Sycamore quantum chip

Here’s a quick reprise of Google’s quantum supremacy claim, published in Nature: Google reported it was able to perform a task (a particular random number generation) using its 53-qubit processor, Sycamore, in 200 seconds versus what would take on the order of 10,000 years on a supercomputer. The authors “estimated the classical computational cost” of running the supremacy circuits with simulations on Summit and on a large Google cluster. IBM demurred, arguing it had discovered a way to do the same task in two days (still hardly 200 seconds) on Summit.

In a sense, who cares. Many argue quantum supremacy is a silly idea to start with. Maybe. Practically, the Google engineering work was important, even if it didn’t constitute achieving quantum supremacy. Let’s move on.

Last year the haze around quantum computing thickened rather than thinned. Many feel the smoky scene is even worse now. But maybe not. Last year we were expecting clearer answers. This year we know better. Here’s why:

  • Are we there yet? No. Actually, definitively no. There’s broad public agreement from virtually all the key players (IBM, Rigetti, Google, Microsoft, D-Wave) that practical applications lie years, many years, ahead. Leave aside the inevitable hype generated by government attention and funding (e.g. the $1.25B National Quantum Initiative Act signed a year ago). Jim Clarke, who leads Intel’s effort, says eight years is a good bet for reaching Quantum Advantage – the point at which QC is able to do something sufficiently better than classical computers to warrant switching. He may be optimistic.
  • The problem – We love the mystery but don’t understand it. Quantum computing is inherently mysterious and therefore fascinating. But most of the public announcements, including growing qubit counts, don’t really mean much to most of us and, as importantly, don’t mean much in terms of catapulting QC forward. Even as very coarse progress milestones, the litany of papers, new larger systems, and collaborations doesn’t tell us much yet.
A rendering of IBM Q System One

Just this week, IBM announced a three-year agreement with Japan, led by the University of Tokyo, to foster QC development. This is the third such IBM international agreement. Are they important? Yes. Will there be practical quantum computing at the end of the first three years of this latest IBM-Japan effort? No. In September, D-Wave revealed the name of its forthcoming 5000-qubit quantum annealing system, Advantage, picked to evoke the idea of Quantum Advantage. Will it deliver?

I love this comment from John Martinis, who leads Google’s quantum hardware effort, on QC (semiconductor-based superconducting) tech’s challenge: “Breaking RSA is going to take, let’s say, 100 million physical qubits. And you know, right now we’re at what is it? 53. So, that’s going to take a few years.” Indeed, Google’s 54-qubit Sycamore chip actually functioned as a 53-qubit device during the supremacy exercise because one of the control wires broke.

What’s clearer today than last year, and more publicly agreed to by most of the QC community, is there won’t be some sudden breakthrough that makes quantum computing a practical tool soon. There’s lots of interesting, important work happening. Hyperscalers are getting more involved, a la the AWS three-pronged effort (a portal for third-party QC tech and services; a hardware research collaboration; a consulting effort to identify potential applications). Intel’s new cryo-controller chip is also interesting. Maybe Intel will become a component supplier to the QC world. S/W tools are edging forward. But…

The point is QC is far from ready – we should watch it, not with a jaded eye but with a patient eye that screens out hype. There’s great QC technology being developed by many organizations along what will be a long journey.

QUANTUM NOTABLES

Google Goes Public with Quantum Supremacy Achievement; IBM Disagrees

IBM Opens Quantum Computing Center; Announces 53-Qubit Machine

IBM Pitches Quantum Volume as Benchmarking Tool for Gate-based Quantum Computers

Intel’s Jim Clarke on its New Cryo-controller and why Intel isn’t Late to the Quantum Party

D-Wave’s Path to 5000 Qubits; Google’s Quantum Supremacy Claim

 

  6. Bits and Pieces from around the HPC Community

MLPerf, the AI benchmarking effort gaining traction, issued an inferencing suite to go with its training exercises; the first inferencing results came out in November, with Nvidia claiming a good showing. Check them out. Has anyone heard more on Deep500? After a well-attended session at SC18 to discuss formative ideas, there was no sign of it at SC19. Deep500’s intent is sort of evident from the title: create an AI benchmarking tool and competition spanning small to very big systems.

Here’s a well-earned kudo. Robert Dennard won SIA’s Robert Noyce Award last year. He is, of course, the father of Dennard Scaling, which sadly has run its course, and, perhaps more importantly, of DRAM. Here’s a link to a nice tribute. Dennard has had great impact on our industry.

As always, there were significant personnel changes – the departure of Rajeeb Hazra, longtime HPC exec at Intel, is one. It will be interesting to see what he does next. Barry Bolding, a former CSO at Cray, joined AWS as director, global HPC. Someone HPCwire has leaned on for insight about life sciences in HPC, Ari Berman, was promoted to CEO of BioTeam consulting – his team advised on the latest design of Biowulf, NIH’s now 20-year-old, constantly evolving HPC system. Gina Tourassi was appointed director of the National Center for Computational Sciences at ORNL. Congrats! Trump announced plans to nominate Sethuraman “Panch” Panchanathan to serve as the 15th director of the NSF.

A quick look at market numbers: HPC server sales in the first half of 2019 totaled $6.7 billion, while full-year 2018 sales grew 15 percent overall, according to Hyperion. That’s a strong picture of health. Maybe more impressive (or scary) is that 11 hyperscalers spent more than $1 billion apiece on IT infrastructure in 2018; three spent more than $5 billion and one, Google, broke the $10 billion spend barrier, according to Intersect360 Research. The concentration of buying power within the cloud community is astonishing.

HPE’s Spaceborne Computer returned home after 615 days on the International Space Station. It was a 1 TFlops system built with off-the-shelf parts to see if they could withstand the radiation. They did, in the sense that although error rates were higher than they would be on the ground, they were manageable and the system was able to do real work. Nvidia didn’t launch any new monster-size GPUs, but it gobbled up interconnect (InfiniBand and Ethernet) specialist Mellanox for $6.9 billion (the deal hasn’t closed yet). Nvidia research chief William Dally talked about the company’s R&D strategy at GTC19 – perhaps not surprisingly, productizing is the central tenet.

Here’s a good closing note. The venerable Titan supercomputer was retired on August 1. Housed at OLCF, Titan ranked as one of the world’s top 10 fastest supercomputers from its debut as No. 1 in 2012 until June 2019. During that time, Titan delivered more than 26 billion core hours of computing time. When launched, it represented a new approach that combined 18,688 AMD 16-core Opteron CPUs with 18,688 Nvidia Kepler K20 GPUs. OLCF Program Director Buddy Bland recalls, “Choosing a GPU-accelerated system was considered a risky choice.” Job well done.

Happy holidays to all.
