Blasts from the (Recent) Past and Hopes for the Future

By John Russell

December 23, 2019

What does 2020 look like to you? What did 2019 look like?

Lots happened but the main trends were carryovers from 2018 – AI messaging again blanketed everything; the roll-out of new big machines and exascale announcements continued; processor diversity and system disaggregation kicked up a notch; hyperscalers continued flexing their muscles (think AWS and its Graviton2 processor); and the U.S. and China continued their awkward trade war.

That’s hardly all. Quantum computing, though nicely-funded, remained a mystery to most. Intel got (back) into the GPU (Xe) business, AMD stayed surefooted (launched next-gen Epyc CPU on 7nm), and no one is quite sure what IBM’s next HPC move will be. TACC stood up Frontera (Dell), now the fastest academic computer in the world.

You get the idea. It was an eventful year…again.

For many participants 2019 presented a risky landscape in which the new normal (see trends above) started exacting tolls and delivering rewards. Amid the many swirling tornadoes of good and ill fortune dotting the HPC landscape there was a big winner and (perhaps) an unexpected loser. Presented here are a few highlights from the year, some thoughts about what 2019 portends, and links to a smattering of HPCwire articles as a quick 2019 rewind. Apologies for the many important trends/events omitted (surging Arm, Nvidia’s purchase of Mellanox, etc.)

  1. It was Good to Be Cray This Year

Cray had a phenomenal transformational year.

Dogged for years by questions about whether a stand-alone supercomputer company could survive in such a narrow, boom-bust market, Cray is thriving and no longer standing alone. Thank you HPE et al. Many questions remain following HPE’s $1.3B purchase of Cray in May, but no matter how you look at it, 2019 was Cray’s year.

As they say, to the victor go the spoils:

  • Exascale Trifecta. Cray swept the exascale sweepstakes winning all three procurements (Aurora, with Intel at Argonne National Laboratory; Frontier with AMD at Oak Ridge National Laboratory; and El Capitan at Lawrence Livermore National Laboratory).
  • Shasta & Slingshot. Successful roll-out of Cray’s new system architecture first announced in late 2018. This was the big bet upon which all else, or almost all else, rests. The company declared its product portfolio refresh complete and exascale era ready in October.
  • AMD and Arm. Cray seems to be a full participant in the burgeoning of processor diversity. Case-in-point: Its new collaboration with Fujitsu to develop a commercial supercomputer powered by the Fujitsu A64FX Arm-based processor, the same chip going into the post-K “Fugaku” supercomputer. It also has significant experience using AMD processors.
  • Welcome to HPE. Fresh from gobbling SGI ($275M, ’16), HPE should be a good home for Cray, which will boost HPE’s ability to pursue high-end procurements and potentially speed the combined company’s development of next-generation technologies. HPE CEO Antonio Neri sizes the supercomputing/exascale sector at $2.5 billion to $5 billion and the sub-supercomputing HPC sector at $8.5 billion. Cray’s HPC storage business is another plus.

Just wow.

Cray’s heritage, of course, stretches back to 1972 when it was founded by Seymour Cray. The company has a leadership position in the top 100 supercomputer installations around the globe and is one of only a handful of companies capable of building these world-class supercomputers. Headquartered in Seattle, Cray had roughly 1,300 employees at the time of purchase and reported revenue of $456 million in its most recent fiscal year, up 16 percent year over year.

It seems a reasonable guess that Cray’s good fortune was more than just chance. Given the HPE investment (price was 3X revenues, 20X earnings), DoE’s exascale procurement investments, and Cray’s stature in US supercomputing amid global tensions, it’s likely many forces, mutually-aware, helped coax the deal forward. In any case, it’s good for Cray and for HPC.

Antonio Neri, HPE CEO

HPE CEO Antonio Neri has said Cray will continue as an entity and brand within HPE. Pete Ungaro, Cray’s former CEO, now becomes SVP & GM for HPC and AI at HPE. Lots of eyes will be on Neri and Ungaro as HPE moves forward. Will there be a senior leadership shakeout or can Neri get his talented senior team to work together in ways that make sense? Absorbing SGI seemed to go well, although the brand seemed to vanish once inside.

At HPCwire we have been wondering what the new HPE strategy will be and what the broader HPE technology and product roadmap will look like. Stay tuned.

CRAY HIGH POINTS

Cray – and the Cray Brand – to Be Positioned at Tip of HPE’s HPC Spear

Cray, AMD to Extend DOE’s Exascale Frontier

HPE to Acquire Cray for $1.3B

Cray Debuts ClusterStor E1000 Finishing Remake of Portfolio

 

  2. Is the Top500 Finally Topping Out?

My pick for the biggest loser of steam in 2019 may surprise. It’s the Top500, and maybe the occasional loss of steam is just part of the natural cycle. The November ’19 list was a bit of a yawn and perhaps not especially accurate. Summit (148 PF Rmax) and Sierra (96 PF Rmax) remained at the top. China’s Sunway TaihuLight (93 PF Rmax) and Tianhe-2A (61 PF Rmax) retained third and fourth. However, there were reports of systems from China that ran the Linpack benchmark and produced results that would have put them atop the list, only to be withdrawn in an attempt to avoid additional blacklisting by the U.S.

It almost doesn’t matter. Not the trade war – that matters. However, handicapping world computing progress and leadership based on Top500 performance seems almost passé, and the list is less of a showcase for startlingly new technologies. It is still a rich list with lots to learn from it, but whether LINPACK remains a good metric, whether the systems entered are really comparable, and whether taking top honors is worth the effort expended are tougher questions today. It would be interesting to get IBM’s candid take on the question given its success with Summit and Sierra on the Top500 but its less successful effort to turn smaller Summit look-alikes into broad commercial (system or processor) traction. Designing and standing up these giants isn’t trivial or cheap.

The list isn’t going away. We love lists. They have value. But the investment-reward ratio and now, potentially questionable bragging rights, undermine the Top500’s value as anointer of the top dog in supercomputing. In some ways, the secondary lists (Green500 and HPCG) are more interesting and crowding the spotlight. This is hardly a new gripe (mea culpa) but the critical mass of opinion may be shifting away from the value of the Top500. There was distinctly less buzz at SC this year around the latest list.

TOP500 TOUCHPOINTS

Top500: US Maintains Performance Lead; Arm Tops Green500

Top500 Purely Petaflops; US Maintains Performance Lead

 

  3. AI in Science – the Next Exascale Initiative?

This year virtually all the major systems makers offered HPC-AI-aimed solutions – typically with one or two ‘supervisor’ CPUs and 4-8 accelerators. There are variations. Established chipmakers worked to beef up memory bandwidth, IO performance, and mixed precision capabilities. Multi-die packaging, including the use of die with varying feature sizes in the same package, started to take hold. Overall, these were continuations of existing trends, with a more definite distinction emerging between training and inferencing platforms. A menu of AI inference chips targeting specific applications is clearly on the way.

On balance, the AI marketing drumbeat was even louder and more pervasive than last year.

Slide from Cerebras emphasizing its wafer-scale size ambitions

More interesting were 1) efforts by the science community to start shaping a larger strategy to fuse AI with HPC and leverage the synergy, and 2) the flurry of AI chips in various stages of readiness coming from start-ups (Graphcore, Cerebras, NovuMind, Wave Computing, Cambricon, etc.). Many of these potential disrupters will find buyers. Intel, of course, just snapped up Habana Labs for $2B.

Let’s look at the new U.S. AI Initiative signed in September and ramping efforts to define a science strategy. The Department of Energy has held a series of AI for Science town halls led by Kathy Yelick (Lawrence Berkeley National Laboratory), Jeff Nichols (Oak Ridge National Laboratory), and Rick Stevens (Argonne National Laboratory) seeking input from a broad science constituency from academia, government, and industry. In theory, this could lead to a funded AI program modeled loosely on the current Exascale Initiative.

Their formal report was due by the end of 2019 but has been pushed slightly into January when we’ll get the first glimpse of the recommendations which are likely to encompass hardware, software, and application areas.

Stevens told HPCwire, “Clearly there’s huge progress in the internet space, but those Facebooks and Googles and Microsofts and Amazons and so on, those guys are not going to be the primary drivers for AI in areas like high-energy physics or nuclear energy or wind power or new materials for solar or for cancer research – it’s not their business focus. We recognize that the challenge is how to leverage the investments made by the private sector to build on those [advances] to add what’s missing for scientific applications — and there’s lots of things missing. And then figure out what the computing community has to do to position the infrastructure and our investments in software and algorithms and math and so on to bring the AI opportunity closer to where we currently are.”

Meanwhile test beds are sprouting up in various national labs and NDAs are being signed with most of the new AI chip crowd. ANL is a good example.

Rick Stevens, Argonne National Laboratory

“We’re setting up an AI accelerator test bed [and] it’s going to be open to the whole community. The accelerator market is filled with companies, right. Our intent is to populate the test bed with [as] many of these as we can get working hardware from. So this one’s [Cerebras chip] a slightly different situation. It’s not nominally aimed at the test bed, but it’s actually a working system for us to do our hardest AI problems on.”

After all the ‘AI’ groundwork done by hyperscalers and the enterprise community, it will be fascinating to watch what contributions the science community makes.

Is AI the new Exascale? We’ll see.

AI LOOK-BACK

AI for Science Town Hall Series Kicks off at Argonne

AI is the Next Exascale – Rick Stevens on What that Means and Why It’s Important

Trump Administration and NIST Issue AI Standards Development Plan

 

  4. IBM & Intel – Two Giants with Giant Challenges

That Intel and IBM are large impressive companies with formidable reach is a given. It’s a massive mistake to underestimate their strengths or to dismiss them. But stuff happens. Markets shift. Technologies plateau. Longtime rivals and upstart competitors all clamber for a share of the pie. Both Intel and IBM are in the midst of massive pivots that encompass their HPC and other activities.

Robert Swan, Intel CEO (Credit: Intel Corporation)

First, Intel. For decades it has been king of the microprocessor market – leveraging design and manufacturing technology leadership to claim a mid-to-high 90s percent market share in CPUs along with an extensive portfolio of other semiconductor products. The decline of Moore’s Law, the rise of heterogeneous computing architectures, product and process missteps (e.g. KNL and OmniPath, both discontinued, Lustre stewardship now ended), along with reinvigorated rivals (principally AMD and recently Arm) have sent shock waves through the company.

You may recall Bob Swan was elevated from interim to permanent CEO last January. Speaking at the Credit Suisse technology conference this December, Swan said he wants change. This quote is from Wccftech:

“We think about having 30 percent share in a $230B [silicon] TAM that we think is going to grow to $300B [silicon] TAM over the next four years. And frankly, I’m trying to destroy the thinking about having 90 percent (CPU market) share inside our company because I think it limits our thinking, I think we miss technology transitions. We miss opportunities because we’re in some ways pre-occupied with protecting 90 instead of seeing a much bigger market with much more innovation going on, both inside our four walls and outside our four walls.

“So we come to work in the morning with a 30 percent share, with every expectation over the next several years that we will play a larger and larger role in our customers’ success, and that doesn’t just (mean) CPUs. It means GPUs, it means AI, it does mean FPGAs, it means bringing these technologies together so we’re solving customers’ problems. So we’re looking at a company with roughly 30 percent share in a $288 silicon TAM, not CPU TAM but silicon TAM. We look at the investments we’ve been making over the last several years in these kind of key technology inflections: 5G, AI, autonomous, acquisitions, including Altera, that we think is more and more relevant both in the cloud but also [in] the network and at the edge.”

There have been key executive changes. Rajeeb Hazra, corporate VP of Intel’s Data Center Group and GM for the Enterprise and Government Group, is retiring and his replacement has not yet been named. Gary Patton is leaving GlobalFoundries, where he was CTO and R&D SVP, to become corporate Intel VP and GM of design enablement reporting to CTO Michael Mayberry.

Don’t count Intel out. It has enormous technical and financial capital. At SC19, Intel debuted its new Xe GPU line with Ponte Vecchio as the top SKU aimed at HPC. There are plans for many variants aimed at different AI applications, with the first parts expected to reach market this summer in a consumer application. Ponte Vecchio will be in Aurora. Intel’s Optane persistent memory product line is showing early traction. Of course, the Xeon CPU family still dominates the landscape – despite process hiccups; on the minus side, AMD is now aiming for double-digit market share and Arm is mounting a surge into the datacenter.

So the news on the product front is mixed.

That said, Intel is getting good marks for playing nicely with collaborators on OpenHPC, the plug-and-play stack it championed that is now reasonably well established, and on the more nascent Compute Express Link (CXL) CPU-to-device interconnect consortium. How oneAPI fares will be interesting to watch. Intel has a lot riding on it for its GPU line, and presumably oneAPI will be how one ports apps to it.

Intel has its fingers in so many pies that foreseeing the company’s trajectory is no easy task.

INTEL 2019 HITS

Intel Debuts New GPU – Ponte Vecchio – and Outlines oneAPI

Intel Launches Cascade Lake Xeons with Up to 56 Cores

Intel AI Summit: New ‘Keem Bay’ Edge VPU, AI Product

At Hot Chips, Intel Shows New Nervana AI Training, Inference Chips

 

IBM’s challenges are somewhat different. Its gigantic $34B purchase of Red Hat, which closed in July, seems to be working as IBM seeks to embrace all things cloud and many things open source and Linux. The new question, really, is how does HPC fit into IBM’s evolving worldview and strategic plan.

Big Blue made massive bets here in the past. The latest chapter roughly starts with its decision to get out of the x86 business by selling it to Lenovo (servers in ’13, PCs in ’05). It gambled on the IBM Power microprocessor line and on leading the OpenPOWER Foundation (with Nvidia, Mellanox, Tyan, and Google). The intent was to create an alternative to the x86 ecosystem.

Where Intel was playing a closed-architecture, one-size-fits-all game, argued IBM, it would take a more collaborative and open approach. No doubt Intel would dispute that characterization. At SC15, long-time IBM executive Ken King argued the Power/OpenPOWER upside was huge, citing an ambitious 20-30 percent market share target.

IBM Power9 AC922 rendering

By SC16 many of the pieces were in place – OpenPOWER had 250-plus members, Power8+ with NVLink technology was out, work on OpenCAPI had started, IBM had launched the Minsky server with Power8, and Google/Rackspace announced plans for a Power9-based server supporting OCP. A confident King told HPCwire at SC16, “This year (2017) is about scale. We’re [IBM/OpenPOWER] only going to be effective if we get to 10-15-20 percent of the Linux socket market. Being at one or two percent won’t [do it].”

Skip a year and fast forward to ISC 2018 when Summit, the IBM-built supercomputer for the CORAL program, regained the top spot on the Top500 for the U.S. It was a stunning achievement. Summit has held the Top500 crown on the last four editions of the list and it is churning out impactful science.

Problem is, the market traction needed never adequately materialized. We won’t go into all the reasons, but cost, the effort to port apps (at least early on), and competitor pricing were all factors. Also, the rise of accelerators (GPUs mostly) made CPUs look a bit mundane – not a good thing given the development costs needed to keep advancing the Power CPU line. Likewise AI, a persistent rumor but hardly a thunderous echo when IBM made its Power/OpenPOWER bet, burst onto the scene with unexpected force. All this while IBM’s cloud effort lagged its hopes and garnered attention in the C-suite.

This year at SC19 IBM introduced no new Power-based systems and provided no update on Power10 chip plans. The OpenPOWER Foundation has been moved to the Linux Foundation. IBM didn’t win any of the exascale awards; this shocked many observers given Summit’s success. Lastly, longtime IBMer Dave Turek, now vice president of high performance and cognitive computing, described a startlingly new IBM HPC-AI strategy that sounded unlike typical IBM practice. It emphasizes selling small systems – as small as a single node, said Turek – to a much larger customer base. It’s based on using IBM AI software to analyze host infrastructure and application performance, and on speeding those efforts with minimum intrusion on the customer’s existing infrastructure.

This is from a Turek interview with HPCwire:

Dave Turek, IBM

“[What] we’re trying to do strategically is get away from this rip-and-replace phenomenon that’s characterized HPC since the beginning of time…So we take a solution. It’s a small Power cluster. It’s got the Bayesian software on it. You have an existing, I don’t know, a Haswell cluster, a few years old, running simulations and because your last dose of capital from your enterprise was five years ago, you can’t get another bunch of money. What do you do?

“What we would do is bring in our cluster, put it in the datacenter, and bring up a database and give access to that database from both sides. So [a] simulation runs, it puts output parameters into the database. I know something’s arrived, I go and inspect that, my machine learning algorithms analyze that, and it makes recommendations of the input parameters. So next simulation, rinse and repeat. And as you progress through this, the Bayesian machine learning solution gets smarter and smarter and smarter. And you get to get to your end game quicker and quicker and quicker.”

The results, he says, can be game-changing: “[As a customer] I put it in a four-node cluster adjacent to my 2,000-node cluster, and I make my 2,000-node cluster behave as if it was an 8,000-node cluster? How long do I have to think about this?” You get the picture.
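The loop Turek describes — simulations write results to a shared database, a learning component inspects them and recommends the next input parameters, rinse and repeat — can be sketched in miniature. The sketch below is purely illustrative: the function names are hypothetical, and a simple recenter-and-shrink grid search stands in for IBM’s actual Bayesian machine learning software.

```python
def simulation(x):
    # Stand-in for an expensive HPC simulation whose output
    # quality peaks at input x = 3.0.
    return -(x - 3.0) ** 2

def recommend(center, width, k=5):
    # Stand-in "advisor": propose k evenly spaced candidate
    # inputs around the best point seen so far.
    step = 2 * width / (k - 1)
    return [center - width + i * step for i in range(k)]

def optimize(lo=0.0, hi=10.0, rounds=6):
    history = []  # plays the role of the shared database
    center, width = (lo + hi) / 2, (hi - lo) / 2
    for _ in range(rounds):
        for x in recommend(center, width):
            history.append((x, simulation(x)))  # simulation writes back
        # The "advisor" inspects the database and recenters on the best
        # result, narrowing its search as evidence accumulates.
        center, _ = max(history, key=lambda rec: rec[1])
        width /= 2
    return max(history, key=lambda rec: rec[1])

best_x, best_y = optimize()  # converges near the true optimum x = 3.0
```

In a real deployment the advisor would be a Bayesian surrogate model proposing far fewer, better-chosen simulation runs; the point of the sketch is the division of labor, with the two sides coupled only through the shared record of inputs and outputs.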

Many wonder if more IBM changes aren’t ahead and what role HPC will have in Big Blue’s future.

IBM’S CHANGING PERSPECTIVE

SC19: IBM Changes Its HPC-AI Game Plan

Crystal Ball Gazing: IBM’s Vision for the Future of Computing

IBM Deepens Plunge into Open Source; OpenPOWER to Join Linux Foundation

 

  5. Quantum – The Haze is a Little Clearer but Solutions Not Nearer

What would technology be without a few kerfuffles? QC had a dandy around Google’s claim of achieving quantum supremacy, to which IBM took vigorous exception and which spawned a tart response on the anonymous Twitter feed Quantum Bullshit Detector. Launched last spring, QBD is generally on the lookout for what it detects as folly in QC. No doubt there’s lots to detect in QC. There’s an interesting overview of QBD written by Sophia Chen in Wired.

Google’s Sycamore quantum chip

Here’s a quick reprise of Google’s quantum supremacy claim, published in Nature: Google reported it was able to perform a task (a particular random number generation) using its 53-qubit processor, Sycamore, in 200 seconds versus what would take on the order of 10,000 years on a supercomputer. The authors “estimated the classical computational cost” of running the supremacy circuits with simulations on Summit and on a large Google cluster. IBM demurred, arguing it had discovered a way to do the same task in two days (still hardly 200 seconds) on Summit.

In a sense, who cares. Many argue quantum supremacy is a silly idea to start with. Maybe. Practically, the Google engineering work was important, even if it didn’t constitute achieving quantum supremacy. Let’s move on.

Last year the haze around quantum computing thickened rather than thinned from the previous year. Many feel the smoky scene is even worse now. But maybe not. Last year we were expecting clearer answers. This year we know better. Here’s why:

  • Are we there yet? No. Actually, definitively no. There’s broad public agreement from virtually all the key players (IBM, Rigetti, Google, Microsoft, D-Wave) that practical applications lie years, many years, ahead. Leave aside the inevitable hype generated by government attention & funding (i.e. the $1.25B National Quantum Initiative Act signed a year ago). Jim Clarke, who leads Intel’s effort, says eight years is a good bet for reaching Quantum Advantage – the point at which QC is able to do something sufficiently better than classical computers to warrant switching. He may be optimistic.
  • The problem – We love the mystery but don’t understand it. Quantum computing is inherently mysterious and therefore fascinating. But most of the public announcements, including growing qubit counts, don’t really mean much to most of us and, as importantly, don’t mean much in terms of catapulting QC forward. Even as very coarse progress milestones, the litany of papers, new larger systems, and collaborations doesn’t tell us much yet.
A rendering of IBM Q System One

Just this week, IBM announced a three-year agreement with Japan, led by the University of Tokyo, to foster QC development. This is the third such IBM international agreement. Are they important? Yes. Will there be practical quantum computing at the end of the first three years of this latest IBM-Japan effort? No. In September, D-Wave revealed the name of its forthcoming 5,000-qubit quantum annealing system, Advantage, chosen to echo the idea of Quantum Advantage. Will it deliver?

I love this comment from John Martinis, who heads Google’s QC (semiconductor-based superconducting) effort, on the technology’s challenge: “Breaking RSA is going to take, let’s say, 100 million physical qubits. And you know, right now we’re at what is it? 53. So, that’s going to take a few years.” Indeed, Google’s 54-qubit Sycamore chip actually functioned as a 53-qubit device during the supremacy exercise because one of the control wires broke.

What’s clearer today than last year, and more publicly agreed to by most of the QC community, is there won’t be some sudden breakthrough that makes quantum computing a practical tool soon. There’s lots of interesting, important work happening. Hyperscalers are getting more involved, a la the AWS three-prong effort (portal for third-party QC tech & services; hardware research collaboration; consulting effort to ID potential apps). Intel’s new cryo-controller chip is also interesting. Maybe Intel will become a component supplier to the QC world. S/W tools are edging forward. But…

The point is QC is far from ready – we should watch it, not with a jaded eye but with a patient eye that screens out hype. There’s great QC technology being developed by many organizations along what will be a long journey.

QUANTUM NOTABLES

Google Goes Public with Quantum Supremacy Achievement; IBM Disagrees

IBM Opens Quantum Computing Center; Announces 53-Qubit

IBM Pitches Quantum Volume as Benchmarking Tool for Gate-based Quantum Computers

Intel’s Jim Clarke on its New Cryo-controller and why Intel isn’t Late to the Quantum Party

D-Wave’s Path to 5000 Qubits; Google’s Quantum Supremacy Claim

 

  6. Bits and Pieces from around the HPC Community

MLPerf, the AI benchmarking effort gaining traction, issued an inferencing suite to go with its training exercises; the first inferencing results came out in November, with Nvidia claiming a good showing. Check them out. Has anyone heard more on Deep500? After holding a well-attended session at SC18 to discuss formative ideas, there was no sign of it at SC19. Deep500’s intent is evident from the title: create an AI benchmarking tool and competition spanning small to very big systems.

Here’s a well-earned kudos. Robert Dennard won SIA’s Robert Noyce Award last year. He is, of course, the father of Dennard Scaling, which sadly has run its course, and, perhaps more importantly, of the DRAM. Here’s a link to a nice tribute. Dennard has had great impact on our industry.

As always, there were significant personnel changes – the departure of Rajeeb Hazra, longtime HPC exec at Intel, is one. It will be interesting to see what he does next. Barry Bolding, a former CSO at Cray, joined AWS as director, global HPC. Someone HPCwire has leaned on for insight about life sciences in HPC, Ari Berman, was promoted to CEO of BioTeam consulting – his team advised on the latest design of Biowulf, NIH’s now 20-year-old, constantly evolving HPC system. Gina Tourassi was appointed director of the National Center for Computational Sciences at ORNL. Congrats! Trump announced plans to nominate Sethuraman “Panch” Panchanathan to serve as the 15th director of the NSF.

A quick look at market numbers: HPC server sales in the first half of 2019 totaled $6.7 billion, while 2018 sales grew 15 percent overall, according to Hyperion. That’s a strong picture of health. Maybe more impressive (or scary) is that 11 hyperscalers spent more than $1 billion apiece on IT infrastructure in 2018; three spent more than $5 billion and one, Google, broke the $10 billion spend barrier, according to Intersect360 Research. The concentration of buying power within the cloud community is astonishing.

HPE’s Spaceborne Computer returned home after 615 days on the International Space Station. It was a 1-teraflops system built with off-the-shelf parts to see if they could withstand the radiation. They did, in the sense that although error rates were higher than on the ground, they were manageable and the system was able to do real work. Nvidia didn’t launch any new monster-size GPUs, but it gobbled up interconnect (InfiniBand and Ethernet) specialist Mellanox for $6.9 billion (the deal hasn’t closed yet). Nvidia research chief William Dally talked about the company’s R&D strategy at GTC19 – perhaps not surprisingly, productizing is the central tenet.

Here’s a good closing note. The venerable Titan supercomputer was retired on August 1. Housed at OLCF, Titan ranked among the world’s top 10 fastest supercomputers from its debut at No. 1 in 2012 until June 2019. During that time, Titan delivered more than 26 billion core hours of computing time. When launched, it represented a new approach, combining 18,688 16-core AMD Opteron CPUs with 18,688 Nvidia Kepler K20 GPUs. OLCF Program Director Buddy Bland recalls, “Choosing a GPU-accelerated system was considered a risky choice.” Job well done.

Happy holidays to all.

Subscribe to HPCwire's Weekly Update!

Be the most informed person in the room! Stay ahead of the tech trends with industy updates delivered to you every week!

Amid Upbeat Earnings, Intel to Cut 1% of Employees, Add as Many

January 24, 2020

For all the sniping two tech old timers take, both IBM and Intel announced surprisingly upbeat earnings this week. IBM CEO Ginny Rometty was all smiles at this week’s World Economic Forum in Davos, Switzerland, after  Read more…

By Doug Black

Indiana University Dedicates ‘Big Red 200’ Cray Shasta Supercomputer

January 24, 2020

After six months of celebrations, Indiana University (IU) officially marked its bicentennial on Monday – and it saved the best for last, inaugurating Big Red 200, a new AI-focused supercomputer that joins the ranks of Read more…

By Staff report

What’s New in HPC Research: Tsunamis, Wildfires, the Large Hadron Collider & More

January 24, 2020

In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here. Read more…

By Oliver Peckham

Toshiba Promises Quantum-Like Advantage on Standard Hardware

January 23, 2020

Toshiba has invented an algorithm that it says delivers a 10-fold improvement for a select class of computational problems, without the need for exotic hardware. In fact, the company's simulated bifurcation algorithm is Read more…

By Tiffany Trader

Energy Research Combines HPC, 3D Manufacturing

January 23, 2020

A federal energy research initiative is gaining momentum with the release of a contract award aimed at using supercomputing to harness 3D printing technology that would boost the performance of power generators. Partn Read more…

By George Leopold

AWS Solution Channel

Challenging the barriers to High Performance Computing in the Cloud

Cloud computing helps democratize High Performance Computing by placing powerful computational capabilities in the hands of more researchers, engineers, and organizations who may lack access to sufficient on-premises infrastructure. Read more…

IBM Accelerated Insights

Intelligent HPC – Keeping Hard Work at Bay(es)

Since the dawn of time, humans have looked for ways to make their lives easier. Over the centuries human ingenuity has given us inventions such as the wheel and simple machines – which help greatly with tasks that would otherwise be extremely laborious. Read more…

TACC Highlights Its Upcoming ‘IsoBank’ Isotope Database

January 22, 2020

Isotopes – elemental variations that contain different numbers of neutrons – can help researchers unearth the past of an object, especially the few hundred isotopes that are known to be stable over time. However, iso Read more…

By Oliver Peckham

Toshiba Promises Quantum-Like Advantage on Standard Hardware

January 23, 2020

Toshiba has invented an algorithm that it says delivers a 10-fold improvement for a select class of computational problems, without the need for exotic hardware Read more…

By Tiffany Trader

In Advanced Computing and HPC, Dell EMC Sets Sights on the Broader Market Middle 

January 22, 2020

If the leading advanced computing/HPC server vendors were in the batting lineup of a baseball team, Dell EMC would be going for lots of singles and doubles – Read more…

By Doug Black

DNA-Based Storage Nears Scalable Reality with New $25 Million Project

January 21, 2020

DNA-based storage, which involves storing binary code in the four nucleotides that constitute DNA, has been a moonshot for high-density data storage since the 1960s. Since the first successful experiments in the 1980s, researchers have made a series of major strides toward implementing DNA-based storage at scale, such as improving write times and storage density and enabling easier file identification and extraction. Now, a new $25 million... Read more…

By Oliver Peckham

AMD Recruits Intel, IBM Execs; Pending Layoffs Reported at Intel Data Platform Group

January 17, 2020

AMD has raided Intel and IBM for new senior managers, one of whom will replace an AMD executive who has played a prominent role during the company’s recharged Read more…

By Doug Black

Atos-AMD System to Quintuple Supercomputing Power at European Centre for Medium-Range Weather Forecasts

January 15, 2020

The United Kingdom-based European Centre for Medium-Range Weather Forecasts (ECMWF) is a supercomputer-powered weather forecasting organization. Read more…

By Oliver Peckham

Julia Programming’s Dramatic Rise in HPC and Elsewhere

January 14, 2020

Back in 2012 a paper by four computer scientists including Alan Edelman of MIT introduced Julia, A Fast Dynamic Language for Technical Computing. Read more…

By John Russell

White House AI Regulatory Guidelines: ‘Remove Impediments to Private-sector AI Innovation’

January 9, 2020

When it comes to new technology, it’s been said government initially stays uninvolved – then gets too involved. Read more…

By Doug Black

IBM Touts Quantum Network Growth, Improving QC Quality, and Battery Research

January 8, 2020

IBM today announced its Q (quantum) Network community had grown to 100-plus – Delta Airlines and Los Alamos National Laboratory are among the most recent additions. Read more…

By John Russell

Using AI to Solve One of the Most Prevailing Problems in CFD

October 17, 2019

How can artificial intelligence (AI) and high-performance computing (HPC) solve mesh generation, one of the most commonly referenced problems in computational engineering? A new study has set out to answer this question and create an industry-first AI-mesh application... Read more…

By James Sharpe

SC19: IBM Changes Its HPC-AI Game Plan

November 25, 2019

It’s probably fair to say IBM is known for big bets. Summit supercomputer – a big win. Red Hat acquisition – looking like a big win. OpenPOWER and Power processors – jury’s out? At SC19, long-time IBMer Dave Turek sketched out a different kind of bet for Big Blue – a small ball strategy, if you’ll forgive the baseball analogy... Read more…

By John Russell

Cray, Fujitsu Both Bringing Fujitsu A64FX-based Supercomputers to Market in 2020

November 12, 2019

The number of top-tier HPC systems makers has shrunk due to a steady march of M&A activity, but there is increased diversity and choice of processing components. Read more…

By Tiffany Trader

Crystal Ball Gazing: IBM’s Vision for the Future of Computing

October 14, 2019

Dario Gil, IBM’s relatively new director of research, painted an intriguing portrait of the future of computing. Read more…

By John Russell

Intel Debuts New GPU – Ponte Vecchio – and Outlines Aspirations for oneAPI

November 17, 2019

Intel today revealed a few more details about its forthcoming Xe line of GPUs – the top SKU is named Ponte Vecchio and will be used in Aurora. Read more…

By John Russell

Dell Ramps Up HPC Testing of AMD Rome Processors

October 21, 2019

Dell Technologies is wading deeper into the AMD-based systems market with a growing evaluation program for the latest Epyc (Rome) microprocessors from AMD. Read more…

By John Russell

D-Wave’s Path to 5000 Qubits; Google’s Quantum Supremacy Claim

September 24, 2019

On the heels of IBM’s quantum news last week come two more quantum items. D-Wave Systems today announced the name of its forthcoming 5000-qubit system, Advantage (yes, the name choice isn’t serendipity), at its user conference being held this week in Newport, RI. Read more…

By John Russell

IBM Unveils Latest Achievements in AI Hardware

December 13, 2019

“The increased capabilities of contemporary AI models provide unprecedented recognition accuracy, but often at the expense of larger computational and energetic…” Read more…

By Oliver Peckham

SC19: Welcome to Denver

November 17, 2019

A significant swath of the HPC community has come to Denver for SC19, which began today (Sunday) with a rich technical program. Read more…

By Tiffany Trader

Jensen Huang’s SC19 – Fast Cars, a Strong Arm, and Aiming for the Cloud(s)

November 20, 2019

We’ve come to expect Nvidia CEO Jensen Huang’s annual SC keynote to contain stunning graphics and lively bravado (with plenty of examples). Read more…

By John Russell

Top500: US Maintains Performance Lead; Arm Tops Green500

November 18, 2019

The 54th Top500, revealed today at SC19, is a familiar list: the U.S. Summit (ORNL) and Sierra (LLNL) machines, offering 148.6 and 94.6 petaflops respectively. Read more…

By Tiffany Trader

51,000 Cloud GPUs Converge to Power Neutrino Discovery at the South Pole

November 22, 2019

At the dead center of the South Pole, thousands of sensors spanning a cubic kilometer are buried thousands of meters beneath the ice. The sensors are part of IceCube. Read more…

By Oliver Peckham

Azure Cloud First with AMD Epyc Rome Processors

November 6, 2019

At Ignite 2019 this week, Microsoft's Azure cloud team and AMD announced an expansion of their partnership that began in 2017 when Azure debuted Epyc-backed instances for storage workloads. The fourth-generation Azure D-series and E-series virtual machines previewed at the Rome launch in August are now generally available. Read more…

By Tiffany Trader

Intel’s New Hyderabad Design Center Targets Exascale Era Technologies

December 3, 2019

Intel's Raja Koduri was in India this week to help launch a new 300,000 square foot design and engineering center in Hyderabad, which will focus on advanced computing. Read more…

By Tiffany Trader

Summit Has Real-Time Analytics: Here’s How It Happened and What’s Next

October 3, 2019

Summit – the world’s fastest publicly-ranked supercomputer – now has real-time streaming analytics. At the 2019 HPC User Forum at Argonne National Laboratory... Read more…

By Oliver Peckham
