Fast Forward: Five HPC Predictions for 2018

By John Russell

December 21, 2017

What’s on your list of high (and low) lights for 2017? Volta 100’s arrival on the heels of the P100? Appearance, albeit late in the year, of IBM’s Power9? Exascale Computing Project leadership shuffling? AMD’s return from the dead in the datacenter? Scandal at PEZY? Aurora’s stumble? Trump? There’s lots to choose from.

Whether you’re thinking ‘good riddance’ or ‘stay a little longer’ about 2017 – it feels like a year where there’s not a lot in between. It’s probably best to focus on the future. Here are a few 2018 predictions, mostly accenting the positive; indeed there is quite a bit to be positive about amid ever-present dark clouds. Along the way there are a few observations about 2017, and links to HPCwire coverage of note.

1. Big Blue Gets Its Mojo Back

Let’s be candid. Since dumping its x86 business, IBM has endured a bumpy ride. Building a new ecosystem has a definite “what were we thinking” level of difficulty. That doesn’t mean it can’t be done. OpenPOWER and IBM have done much that’s right, but getting to payoff is a costly, painful struggle. Power8 systems, despite the much-praised Power instruction set and scattered public support from systems builders and hyperscalers, mostly fizzled; some would argue it was swamped by timing and anticipation of Power9. Pricing may also have played a role.

Three events now seem poised to reenergize IBM and OpenPOWER.

  • First is the arrival of the Power9 processor in December. It’s being promoted as a from-the-ground-up, AI-optimized chip able to leverage all kinds of accelerator (FPGA, GPU, etc.), high-speed interconnect (NVLink, OpenCAPI, etc.), and high-memory-bandwidth technology. It’s available in IBM’s AC922 server, based on the same architecture as the Department of Energy CORAL supercomputers, Summit and Sierra. The Power9 wait is over.
  • Second, the Aurora project being led by Intel has been delayed. True, it is now scheduled to be the first U.S. exascale machine, deployed in 2021 at Argonne National Laboratory, but it clearly missed its mark as one of the scheduled pre-exascale machines. There’s also an open question as to which processor will be used for Aurora. And 2021 still seems quite distant. Overall Aurora’s trouble is IBM’s serendipity.
  • Third, expectations are high the IBM Summit machine will be stood up and tested in time to top the next Top500 list in June 2018. It’s expected to hit 150-200 petaflops peak performance. That would be a huge boost for IBM and its advanced Power architecture from a public awareness perspective. China has dominated the recent lists (ten consecutive ‘wins’) and the top performing U.S. machine, Titan, fell from fourth to fifth in November. BTW, Titan is powered by AMD Opteron processors.
IBM Power9 AC922 rendering

IBM will attack the market in force with its ‘Summit-based’ servers. It will also likely get stronger buy-in from the OpenPOWER community, most of whom must still support Intel systems. Power9 system price points are also expected to be more attractive. Finally, with U.S. national competitiveness juices bubbling – amplified by Trump’s ‘America First’ mantra – the current U.S. administration is likely to talk up the IBM Top500 Summit achievement.

Bottom line: Big Blue will start making early hay in the server market (HPC and otherwise) after what must seem like a very long growing season. Time to start the harvest. Also, let’s not forget IBM is a giant in computing generally with a growing cloud business, a rapidly advancing quantum computing research and commercial program, a neuromorphic chip and research effort, and extensive portfolio of storage, software, mainframe, and services offerings. (For me, the jury is still out on Watson.) Big Blue is getting its mojo back.

Links to relevant articles:

IBM Begins Power9 Rollout with Backing from DOE, Google

For IBM/OpenPOWER: Success in 2017 = (Volume) Sales

Flipping the Flops and Reading the Top500 Tea Leaves 


2. AMD’s Datacenter Revival Looks Good – Don’t Blow It!

AMD has had many lives, and it’s crossed swords with Intel over x86 matters (technology and markets) with regularity. Sometimes enough is enough. The company largely abandoned the datacenter a few years ago for a number of reasons; David versus Goliath doesn’t always end well for David. This year AMD has plunged back in, and its bet is a big one that encompasses solid technology, price-performance, and, as of SC17, considerable commitment from some important systems makers to support the EPYC processor line.

“It’s not enough to come back with one product, you’ve got to come back with a product cadence that moves as the market moves. So not only are we coming back with EPYC, we’re also [discussing follow-on products] so when customers move with us today on EPYC they know they have a safe home and a migration path with Rome,” said Scott Aylor, AMD corporate VP and GM of enterprise solutions business, at the time of the launch.

Lower cost is clearly part of the strategy and AMD has been touting cost-performance comparisons. A portion of the EPYC line has been designed for single-socket servers, which are nearly extinct in the datacenter these days. AMD argues that around 50 percent of the market buys two-socket solutions because there was no alternative; now, says AMD, there is. In fact, Microsoft Azure recently announced an instance based on a single-socket EPYC solution.

To meet a broad range of applications, AMD is tiering products in 32-, 24-, and 16-core ranges. The top end is aimed at scale-out and HPC workloads. Indeed, AMD showcased its ‘Project 47’ supercomputer at SIGGRAPH this summer, based on the EPYC 7601 and AMD Radeon Instinct MI25 GPUs. A full 20-server rack of P47 systems achieves 30.05 gigaflops per watt in single-precision performance, but is less impressive on double-precision arithmetic.
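For back-of-envelope context, that efficiency figure implies a rack power budget you can sketch in a few lines. The ~1 petaflops single-precision figure for the 20-server P47 rack is an assumption taken from AMD’s launch claims; only the 30.05 gigaflops-per-watt number appears in the text above.

```python
# Back-of-envelope: rack power implied by AMD's quoted efficiency.
# Assumes ~1 petaflops single precision for the 20-server P47 rack
# (AMD's launch claim); only 30.05 GF/W is quoted in the text.
rack_flops = 1.0e15                  # ~1 petaflops, single precision
gigaflops_per_watt = 30.05
power_watts = rack_flops / (gigaflops_per_watt * 1e9)
print(f"{power_watts / 1000:.1f} kW")   # ≈ 33.3 kW for the whole rack
```

That ~33 kW is plausible for a dense accelerated rack, which is what makes the efficiency claim notable.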

Bottom line: AMD is back, at least for now, charging into the datacenter.

Links to relevant articles:

AMD Showcases Growing Portfolio of EPYC and Radeon-based Systems at SC17

AMD Charges Back into the Datacenter and HPC Workflows with EPYC Processor

AMD Stuffs a Petaflops of Machine Intelligence into 20-Node Rack

Amazon Debuts New AMD-based GPU Instances for Graphics Acceleration


3. The Quantum Computing Haze will Thicken, Not Thin

Ok, I admit it. Quantum computing pretty much baffles me. Despite the mountain of rhetoric surrounding quantum computing, I suspect I am not alone. Universal Quantum Computers. Quantum Adiabatic Annealing Computers. An expanding zoo of qubit types. Qudits. Quantum simulation on classical computers. Quantum Supremacy. Google, Microsoft, IBM, D-wave, a handful of academic and national lab quantum computing programs. Something called the Chicago Quantum Exchange under David Awschalom, associated with UChicago, Argonne, Fermilab, and located in the Institute for Molecular Engineering.

Feynman would chuckle. The saying is ‘where there’s smoke there’s fire’ and while that’s true enough, the plentiful smoke around quantum computing today is awfully hard to see through. Obviously there is something important going on but how important or when it will be important (let alone mainstream) is very unclear.

An IBM cryostat wired for a prototype 50 qubit system. (PRNewsfoto/IBM)

Philip Ball’s recent Nature piece on quantum supremacy (Race for quantum supremacy hits theoretical quagmire, Nature, 11/14/17) is both informative and entertaining. Quantum supremacy is the stage at which the capabilities of a quantum computer exceed those of any available classical computer. Of course the latter keep advancing.

Ball wrote, “Computer scientists and engineers are rather more phlegmatic about the notion of quantum supremacy than excited commentators who foresee an impending quantum takeover of information technology. They see it not as an abrupt boundary but as a symbolic gesture: a conceptual tool on which to peg a discussion of the differences between the two methods of computation. And, perhaps, a neat advertising slogan.”

Actually there’s a fair amount of good literature on quantum computing. Just a few of the current challenges include size (how many qubits) of current machines, needed error correction, nascent software, decoherency, exotic machines – think supercooled superconductors as an example – optimum qubit types…You get the idea.
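One reason the “how many qubits” question matters: the cost of simulating a quantum state on a classical machine doubles with every qubit added, which is roughly why supremacy talk centers on systems around 50 qubits. A quick sketch of the memory arithmetic, assuming full state-vector simulation with double-precision complex amplitudes:

```python
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to hold a full state vector of 2**n_qubits
    complex128 amplitudes (16 bytes each)."""
    return (2 ** n_qubits) * bytes_per_amplitude

# A well-equipped workstation handles ~30 qubits; 50 qubits needs
# petabytes, beyond any single classical machine's memory.
print(statevector_bytes(30) / 2**30)   # 16.0 GiB
print(statevector_bytes(50) / 1e15)    # ~18 petabytes
```

Clever circuit-simulation techniques push past naive state-vector limits, which is exactly why the supremacy threshold keeps moving.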

Bottom line: The haze surrounding quantum computing’s future won’t lift for a few more years. Maybe a few specific quantum communication applications will emerge sooner.

Links to relevant articles:

Microsoft Wants to Speed Quantum Development

House Subcommittee Tackles US Competitiveness in Quantum Computing

Intel Delivers 17-Qubit Quantum Chip to European Research Partner

IBM Breaks Ground for Complex Quantum Chemistry

Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access

IBM Launches Commercial Quantum Network with Samsung, ORNL


4. AI will Continue Sucking the Air Out of the Room

Maybe this is a good thing. AI writ large is blanketing the computing landscape. Its language is everywhere and dominates the marketing conversation. Every vendor, it seems, has an AI box or service or chip(s). More interesting is what’s happening in developing and using ‘AI’ technology. The CANcer Distributed Learning Environment (CANDLE) project – tasked with developing deep learning tools for the war on cancer and putting them to use – is a good example; it has released the early version of its infrastructure on GitHub. This includes algorithms, frameworks, and all manner of relevant tools.

CANDLE has already developed a model able to predict tumor response to drug pairs for a particular cancer type with 93 percent accuracy. The data sets are huge and machine learning is the only way to chew through them to build models. It’s working on a model to handle triplet drug combos. “There will be drugs I predict in clinical trials based on the results that we achieve this year,” Rick Stevens, one of the PIs on CANDLE and a senior researcher at Argonne National Laboratory, told HPCwire at SC17.
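To make the machine-learning angle concrete, here is a deliberately tiny, hypothetical sketch of the kind of supervised model CANDLE builds at vastly larger scale: features describing a (cell line, drug pair) combination go in, a response prediction comes out. Every name and number below is illustrative; the real CANDLE models are deep networks trained on huge genomic and drug-descriptor datasets, not a toy logistic regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 hypothetical (cell line, drug pair)
# samples, 8 features each; label 1 means the tumor responded.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

# Minimal logistic-regression trainer (gradient descent on cross-entropy).
w = np.zeros(8)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted response probability
    w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient step

accuracy = np.mean(((X @ w) > 0) == (y == 1))
```

The point of the sketch is the shape of the problem, not the model: with millions of such combinations, sweeping them experimentally is infeasible, which is why learned models are the only way through the data.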

There’s a wealth of new technology (and some rather old data analytics technology) to support AI. New frameworks. Advancing accelerator technology. The rise of mixed-precision machines – Japan’s ABCI (AI Bridging Cloud Infrastructure), a 130-petaflops (half-precision) supercomputer planned for early 2018, is a good high-end example. There’s too much to cover here beyond saying AI is a game changer on its own for many applications and will also prove to be incredibly powerful in speeding up traditional floating-point-intensive HPC applications such as molecular modeling.
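The ‘half-precision’ qualifier on figures like ABCI’s 130 petaflops matters: float16 trades numeric range and precision for speed, which deep learning training tolerates but many traditional HPC kernels cannot. A quick illustration of the trade-off:

```python
import numpy as np

# float16 carries only a 10-bit mantissa: above 2048, consecutive
# integers are no longer representable, so adding 1 can vanish.
half = np.float16(2048) + np.float16(1)
single = np.float32(2048) + np.float32(1)
print(half)     # 2048.0 -- the +1 was rounded away
print(single)   # 2049.0 -- exact in 32-bit
```

This is why mixed-precision machines quote separate half-, single-, and double-precision flops numbers, and why comparing them to Top500 (double-precision) results takes care.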

In a memo to employees this week, Intel CEO Brian Krzanich wrote, “It’s almost impossible to perfectly predict the future, but if there’s one thing about the future I am 100 percent sure of, it is the role of data. Anything that produces data, anything that requires a lot of computing.” AI computing will be an important part of nearly all computing going forward.

Bottom line: Brace for more AI.

Links to relevant articles:

Japan Plans Super-Efficient AI Supercomputer

AI Speeds Astrophysics Image Analysis by 10,000x

Cray Brings AI and HPC Together on Flagship Supers

Nvidia CEO Predicts AI ‘Cambrian Explosion’

Intel Unveils Deep Learning Framework, BigDL


5. The HPC Identity Crisis will Continue in Force (Does it Matter?)

Ok, a better phrasing is: what constitutes HPC today, and do we even know how many HPC workers there are? We talk about this inside HPCwire all the time. The blending (broadening) of HPC with big data/AI computing is one element. Simple redefinition by fiat is another. Various constituents offer differing perspectives.

“When someone says HPC it means something really specific to traditional HPC folks; it’s tightly coupled, we’ve got some sort of low-latency interconnect, parallel file systems, designed to run high performance, highly scalable custom applications. But today, this has changed. HPC has come to mean pretty much any form of scientific computing and as a result, its breadth has grown in terms of what kind of applications we need to support.” – Gregory Kurtzer, Singularity (HPC container software).

Hyperion Research pegs the number of HPC sites in the U.S. at 759 (academic, government, commercial) and suggests there could be around 120,000 HPCers in the U.S. and perhaps a quarter of a million worldwide.

Making sense of the collision between traditional HPC and big data (and finding ways to harmonize the two) has been a hot topic at least since 2015, when it was identified as an objective in the National Strategic Computing Initiative. There’s even been a series of five international workshops (in the US, Japan, the EU, and China) on Big Data and Extreme-scale Computing (BDEC), and Jack Dongarra and colleagues working on the project have just issued a report, Pathways to Convergence: Towards a Shaping Strategy for a Future Software and Data Ecosystem for Scientific Inquiry. HPCwire will dig into the report’s findings at a subsequent time.

The point here is that change is overwhelming how HPC is looked at and what it is considered to be. HPC census and market sizing is an ongoing challenge. One astute industry observer noted:

“The idea of framing out the real HPC TAM (total available market) is an interesting one. If I live in a big DoE facility and run code on the Titan HPC, I know I am an HPC guy. But if I am a car part designer that subs to GM, who uses Autodesk for visualization for the design of a driver’s side mirror, I may not think of myself as such (I sure as hell will not attend SC17).

“That and the fact that I saw so many vendors at SC that have products that address some of the less technically aggressive aspects of HPC (i.e. tape storage) that really aren’t HPC specific but that can be relevant to HPC users. So it’s hard to say what the TAM is because reaching out to customers who may be HPC, but don’t move in the HPC world per se is complicated at best.

“Even worse, figuring out how to count marketing dollars that reach some indeterminate percentage of a loosely defined HPC market is fraught with intrigue.”

Bottom line: The HPC Who-am-I? pathos will continue in 2018 but preoccupation with delivering AI will mute some of the debate.


6. Lesser but Still Interesting 2018/2017 Glimpses

Doug Kothe, ECP director

The container craze will continue because it solves a real problem. ECP, now led by Doug Kothe, will shift into its next gear as the first U.S. pre-exascale machines are stood up. Forget the doubters – the Nvidia juggernaut will keep rolling, though perhaps there won’t be another V100-like blockbuster introduced in 2018. Intel’s impressive Skylake chip line arrived and is in systems everywhere. Vendors’ infatuation with selling so-called easier-to-deploy HPC solutions into the enterprise – think vertical solutions – will fade; they’ve tried selling these but mostly without success for many reasons.

ARM will continue its march into new markets. This topic doesn’t rise to greater prominence here because we still need to see more systems online, whether at the very high end such as the post-K computer, or sales of ARM server systems such as HPE’s recently introduced Apollo 70, the company’s first ARM-based HPC server. The earlier ARM-based Moonshot offering fared poorly.

Unexpected scandal marred the end of the year: the arrest of PEZY founder, president and CEO Motoaki Saito and another PEZY employee, Daisuke Suzuki, on suspicion of defrauding a government institution of 431 million yen (~$3.8 million), was unsettling. HPC seems reasonably free of such misbehavior. Maybe that’s my misperception.

It was sad to see what amounts to the end of the line for SPARC with Oracle’s discontinuance of development efforts and related layoffs.

On a positive note: There’s a new book from Thomas Sterling, professor of electrical engineering and director of the Center for Research in Extreme Scale Technologies, Indiana University – High Performance Computing: Modern Systems and Practices, co-written with colleagues Matthew Anderson and Maciej Brodowicz. It’s available now (link to publisher: https://www.elsevier.com/books/high-performance-computing/sterling/978-0-12-420158-3?start_rank=1&sortby=sortByDateDesc&imprintname=Morgan%20Kaufmann).

As always, there was a fair amount of personnel shuffling this year. Diane Bryant left Intel and joined Google. AI pioneer Andrew Ng left his post at Baidu. Intel lured GPU designer Raja Koduri from AMD; he was SVP and chief architect of the Radeon Technologies Group. Meg Whitman is stepping down as chairman of HPE – the company she helped bring into existence by overseeing the split-up of HP in 2015 – and will be succeeded by Antonio Neri.

Obviously there is so much more to talk about. The HPC world is a vibrant, fascinating place, and tremendous force in science and society today.

Happy holidays and a hopeful new year to all. On to 2018!
