Fast Forward: Five HPC Predictions for 2018

By John Russell

December 21, 2017

What’s on your list of high (and low) lights for 2017? Volta 100’s arrival on the heels of the P100? Appearance, albeit late in the year, of IBM’s Power9? Exascale Computing Project leadership shuffling? AMD’s return from the dead in the datacenter? Scandal at PEZY? Aurora’s stumble? Trump? There’s lots to choose from.

Whether you’re thinking ‘good riddance’ or ‘stay a little longer’ about 2017 – it feels like a year where there’s not a lot in between – it’s probably best to focus on the future. Here are a few 2018 predictions, mostly accenting the positive; indeed there is quite a bit to be positive about amid ever-present dark clouds. Along the way there are a few observations about 2017, and links to HPCwire coverage of note.

1. Big Blue Gets Its Mojo Back

Let’s be candid. Since dumping its x86 business, IBM has endured a bumpy ride. Building a new ecosystem has a definite “what were we thinking” level of difficulty. That doesn’t mean it can’t be done. OpenPOWER and IBM have done much that’s right, but getting to payoff is a costly, painful struggle. Power8 systems, despite the much-praised Power instruction set and scattered public support from systems builders and hyperscalers, mostly fizzled; some would argue it was swamped by timing and anticipation of Power9. Pricing may also have played a role.

Three events now seem poised to reenergize IBM and OpenPOWER.

  • First is the arrival of the Power9 processor in December. It’s being promoted as a from-the-ground-up, AI-optimized chip able to leverage all kinds of accelerator (FPGA, GPU, etc.), high-speed interconnect (NVLink, OpenCAPI, etc.), and high-memory-bandwidth technology. It’s available in IBM’s AC922 server, based on the same architecture as the Department of Energy CORAL supercomputers, Summit and Sierra. The Power9 wait is over.
  • Second, the Aurora project being led by Intel has been delayed. True, it is now scheduled to be the first U.S. exascale machine, deployed in 2021 at Argonne National Laboratory, but it clearly missed its mark as one of the scheduled pre-exascale machines. There’s also an open question as to which processor will be used for Aurora. And 2021 still seems quite distant. Overall Aurora’s trouble is IBM’s serendipity.
  • Third, expectations are high that the IBM Summit machine will be stood up and tested in time to top the next Top500 list in June 2018. It’s expected to hit 150-200 petaflops peak performance. That would be a huge boost for IBM and its advanced Power architecture from a public awareness perspective. China has dominated the recent lists (ten consecutive ‘wins’) and the top performing U.S. machine, Titan, fell from fourth to fifth in November. BTW, Titan is powered by AMD Opteron processors.
IBM Power9 AC922 rendering

IBM will attack the market in force with its ‘Summit-based’ servers. It will also likely get stronger buy-in from the OpenPOWER community, most of whom must still support Intel systems. Power9 system price points are also expected to be more attractive. Finally, with U.S. national competitiveness juices bubbling – amplified by Trump’s ‘America First’ mantra – the current U.S. administration is likely to talk up the IBM Top500 Summit achievement.

Bottom line: Big Blue will start making early hay in the server market (HPC and otherwise) after what must seem like a very long growing season. Time to start the harvest. Also, let’s not forget IBM is a giant in computing generally with a growing cloud business, a rapidly advancing quantum computing research and commercial program, a neuromorphic chip and research effort, and an extensive portfolio of storage, software, mainframe, and services offerings. (For me, the jury is still out on Watson.) Big Blue is getting its mojo back.

Links to relevant articles:

IBM Begins Power9 Rollout with Backing from DOE, Google

For IBM/OpenPOWER: Success in 2017 = (Volume) Sales

Flipping the Flops and Reading the Top500 Tea Leaves 


2. AMD’s Datacenter Revival Looks Good – Don’t Blow It!

AMD has had many lives and it’s crossed swords with Intel over x86 matters (technology and markets) with regularity. Sometimes enough is enough. The company largely abandoned the datacenter a few years ago for a number of reasons. David versus Goliath doesn’t always end well for David. This year AMD has plunged back in and its bet is a big one that encompasses solid technology, price performance, and as of SC17, considerable commitment from some important systems makers to support the EPYC processor line.

“It’s not enough to come back with one product, you’ve got to come back with a product cadence that moves as the market moves. So not only are we coming back with EPYC, we’re also [discussing follow-on products] so when customers move with us today on EPYC they know they have a safe home and a migration path with Rome,” said Scott Aylor, AMD corporate VP and GM of enterprise solutions business, at the time of the launch.

Lower cost is clearly part of the strategy, and AMD has been touting cost-performance comparisons. A portion of the EPYC line has been designed for single-socket servers, which are nearly extinct in the datacenter these days. AMD argues that around 50 percent of the market buys two-socket solutions only because there was no alternative; now, says AMD, there is. In fact, Microsoft Azure recently announced an instance based on a single-socket EPYC solution.

To meet a broad range of applications, AMD is tiering products in 32-, 24-, and 16-core ranges. The top end is aimed at scale-out and HPC workloads. Indeed, AMD showcased its ‘Project 47’ supercomputer at SIGGRAPH over the summer, based on the EPYC 7601 and AMD Radeon Instinct MI25 GPUs. A full 20-server rack of P47 systems achieves 30.05 gigaflops per watt in single-precision performance, but is less impressive at double-precision arithmetic.
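A quick back-of-envelope check helps put that efficiency figure in context. The sketch below is illustrative only: the roughly 1 petaflops of single-precision throughput is the headline number from AMD’s 20-node rack demo, and the implied power draw is derived from it, not an AMD specification.

```python
# Rough sanity check of the P47 rack efficiency figure quoted above.
rack_flops_sp = 1.0e15    # ~1 petaflops single precision (AMD's headline figure)
gflops_per_watt = 30.05   # efficiency quoted for the 20-server rack

watts = (rack_flops_sp / 1e9) / gflops_per_watt
print(f"Implied rack power: {watts / 1000:.1f} kW")  # ~33.3 kW
```

At roughly 33 kW for a petaflops of single-precision compute, the appeal for dense machine-learning deployments is clear; the double-precision story, as noted, is weaker.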

Bottom line: AMD is back, at least for now, charging into the datacenter.

Links to relevant articles:

AMD Showcases Growing Portfolio of EPYC and Radeon-based Systems at SC17

AMD Charges Back into the Datacenter and HPC Workflows with EPYC Processor

AMD Stuffs a Petaflops of Machine Intelligence into 20-Node Rack

Amazon Debuts New AMD-based GPU Instances for Graphics Acceleration


3. The Quantum Computing Haze will Thicken Not Thin

Ok, I admit it. Quantum computing pretty much baffles me. Despite the mountain of rhetoric surrounding quantum computing, I suspect I am not alone. Universal Quantum Computers. Quantum Adiabatic Annealing Computers. An expanding zoo of qubit types. Qudits. Quantum simulation on classical computers. Quantum Supremacy. Google, Microsoft, IBM, D-Wave, a handful of academic and national lab quantum computing programs. Something called the Chicago Quantum Exchange under David Awschalom, associated with UChicago, Argonne, and Fermilab, and located in the Institute for Molecular Engineering.

Feynman would chuckle. The saying is ‘where there’s smoke there’s fire’ and while that’s true enough, the plentiful smoke around quantum computing today is awfully hard to see through. Obviously there is something important going on but how important or when it will be important (let alone mainstream) is very unclear.

An IBM cryostat wired for a prototype 50 qubit system. (PRNewsfoto/IBM)

Philip Ball’s recent Nature piece on quantum supremacy (Race for quantum supremacy hits theoretical quagmire, Nature, 11/14/17) is both informative and entertaining. Quantum supremacy is the stage at which the capabilities of a quantum computer exceed those of any available classical computer. Of course the latter keep advancing.

Ball wrote, “Computer scientists and engineers are rather more phlegmatic about the notion of quantum supremacy than excited commentators who foresee an impending quantum takeover of information technology. They see it not as an abrupt boundary but as a symbolic gesture: a conceptual tool on which to peg a discussion of the differences between the two methods of computation. And, perhaps, a neat advertising slogan.”

Actually there’s a fair amount of good literature on quantum computing. Just a few of the current challenges include the size (how many qubits) of current machines, needed error correction, nascent software, decoherence, exotic machines – think supercooled superconductors as an example – optimum qubit types… You get the idea.
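One of those challenges – machine size – is easy to make concrete, and it also explains why classical simulation of quantum systems hits a wall. An n-qubit state vector holds 2^n complex amplitudes, so memory doubles with each added qubit. A minimal sketch (mine, not drawn from any of the programs above):

```python
def statevector_bytes(n_qubits: int) -> int:
    """Memory needed to store a full n-qubit state vector.

    Each of the 2**n complex amplitudes takes 16 bytes (complex128).
    """
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(f"{n} qubits -> {statevector_bytes(n):.3e} bytes")
# 10 qubits fit in kilobytes; 30 qubits need ~16 GiB; 50 qubits need
# ~16 PiB, which is why ~50-qubit devices like IBM's prototype sit near
# the frontier of what classical simulation can verify.
```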

Bottom line: The haze surrounding quantum computing’s future won’t lift for a few more years. Maybe a few specific quantum communication applications will emerge sooner.

Links to relevant articles:

Microsoft Wants to Speed Quantum Development

House Subcommittee Tackles US Competitiveness in Quantum Computing

Intel Delivers 17-Qubit Quantum Chip to European Research Partner

IBM Breaks Ground for Complex Quantum Chemistry

Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access

IBM Launches Commercial Quantum Network with Samsung, ORNL


4. AI will Continue Sucking the Air Out of the Room

Maybe this is a good thing. AI writ large is blanketing the computing landscape; its language is everywhere and dominates vendor marketing. Every vendor, it seems, has an AI box or service or chip(s). More interesting is what’s happening in developing and using ‘AI’ technology. The CANcer Distributed Learning Environment (CANDLE) project – tasked with developing deep learning tools for the war on cancer and putting them to use – is a good example; it has released the early version of its infrastructure on GitHub. This includes algorithms, frameworks, and all manner of relevant tools.

CANDLE has already developed a model able to predict tumor response to drug pairs for a particular cancer type with 93 percent accuracy. The data sets are huge, and machine learning is the only way to chew through them to build models. It’s now working on a model to handle triplet drug combos. “There will be drugs I predict in clinical trials based on the results that we achieve this year,” Rick Stevens, one of the PIs on CANDLE and a senior researcher at Argonne National Laboratory, told HPCwire at SC17.

There’s a wealth of new (and some rather old) data analytics technology to support AI. New frameworks. Advancing accelerator technology. The rise of mixed-precision machines – Japan’s plan for a 130-petaflops (half-precision) supercomputer, ABCI (AI Bridging Cloud Infrastructure), by early 2018 is a good high-end example. There’s too much to cover here beyond saying AI is a game changer on its own for many applications and will also prove incredibly powerful in speeding up traditional floating-point-intensive HPC applications such as molecular modeling.
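What “half precision” gives up is worth keeping in mind when reading flops figures like ABCI’s. The float16 format that deep-learning hardware races on carries only about three decimal digits, which neural-network training tolerates but traditional FP64 codes generally cannot. A minimal numpy sketch (illustrative only, not tied to any particular machine):

```python
import numpy as np

eps16 = np.finfo(np.float16).eps   # spacing just above 1.0: 2**-10 ~ 0.000977
eps64 = np.finfo(np.float64).eps   # ~2.22e-16 for double precision

# Naively accumulating 0.1 ten thousand times in half precision:
total = np.float16(0.0)
for _ in range(10_000):
    total = np.float16(total + np.float16(0.1))

print(float(eps16), float(eps64), float(total))
# The sum stalls at 256.0 rather than reaching ~1000: once the running
# total hits 256, the float16 grid spacing there (0.25) is too coarse
# for an increment of ~0.1 to register.
```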

In a memo to employees this week, Intel CEO Brian Krzanich wrote, “It’s almost impossible to perfectly predict the future, but if there’s one thing about the future I am 100 percent sure of, it is the role of data. Anything that produces data, anything that requires a lot of computing.” AI computing will be an important part of nearly all computing going forward.

Bottom line: Brace for more AI.

Links to relevant articles:

Japan Plans Super-Efficient AI Supercomputer

AI Speeds Astrophysics Image Analysis by 10,000x

Cray Brings AI and HPC Together on Flagship Supers

Nvidia CEO Predicts AI ‘Cambrian Explosion’

Intel Unveils Deep Learning Framework, BigDL


5. The HPC Identity Crisis will Continue in Force (Does it Matter?)

Ok, a better phrasing: what constitutes HPC today, and do we even know how many HPC workers there are? We talk about this inside HPCwire all the time. The blending (broadening) of HPC with big data/AI computing is one element. Simple redefinition by fiat is another. Various constituents offer differing perspectives.

“When someone says HPC it means something really specific to traditional HPC folks; it’s tightly coupled, we’ve got some sort of low-latency interconnect, parallel file systems, designed to run high performance, highly scalable custom applications. But today, this has changed. HPC has come to mean pretty much any form of scientific computing and as a result, its breadth has grown in terms of what kind of applications we need to support.” – Gregory Kurtzer, Singularity (HPC container software).

Hyperion Research pegs the number of HPC sites in the U.S. at 759 (academic, government, commercial) and suggests there could be around 120,000 HPCers in the U.S. and perhaps a quarter of a million worldwide.

Making sense of the collision between traditional HPC and big data (and finding ways to harmonize the two) has been a hot topic at least since 2015, when it was identified as an objective in the National Strategic Computing Initiative. There’s even been a series of five international workshops (in the U.S., Japan, the EU, and China) on Big Data and Extreme-scale Computing (BDEC), and Jack Dongarra and colleagues working on the project have just issued a report, Pathways to Convergence: Towards a Shaping Strategy for a Future Software and Data Ecosystem for Scientific Inquiry. HPCwire will dig into the report’s findings at a later date.

The point here is that change is overwhelming how HPC is looked at and what it is considered to be. HPC census and market sizing is an ongoing challenge. One astute industry observer noted:

“The idea of framing out the real HPC TAM (total available market) is an interesting one. If I live in a big DoE facility and run code on the Titan HPC, I know I am an HPC guy. But if I am a car part designer that subs to GM, who uses Autodesk for visualization for the design of a driver’s side mirror, I may not think of myself as such (I sure as hell will not attend SC17).

“That and the fact that I saw so many vendors at SC that have products that address some of the less technically aggressive aspects of HPC (i.e. tape storage) that really aren’t HPC specific but that can be relevant to HPC users. So it’s hard to say what the TAM is because reaching out to customers who may be HPC, but don’t move in the HPC world per se is complicated at best.

“Even worse, figuring out how to count marketing dollars that reach some indeterminate percentage of a loosely defined HPC market is fraught with intrigue.”

Bottom line: The HPC Who-am-I? pathos will continue in 2018 but preoccupation with delivering AI will mute some of the debate.


6. Lesser but Still Interesting 2018/2017 Glimpses

Doug Kothe, ECP director

The container craze will continue because it solves a real problem. ECP, now led by Doug Kothe, will shift into its next gear as the first U.S. pre-exascale machines are stood up. Forget the doubters – the Nvidia juggernaut will keep rolling, though perhaps there won’t be another V100-like blockbuster introduced in 2018. Intel’s impressive Skylake chip line arrived and is in systems everywhere. Vendors’ infatuation with selling so-called easier-to-deploy HPC solutions into the enterprise – think vertical solutions – will fade; they’ve tried selling these, mostly without success, for many reasons.

ARM will continue its march into new markets. This topic doesn’t rise to greater prominence here because we still need to see more systems online, whether at the very high end such as the post-K computer, or sales of ARM server systems such as HPE’s recently introduced Apollo 70, the company’s first ARM-based HPC server. The earlier ARM-based Moonshot offering fared poorly.

Unexpected scandal marred the end of the year: the arrest of PEZY founder, president and CEO Motoaki Saito and another PEZY employee, Daisuke Suzuki, on suspicion of defrauding a government institution of 431 million yen (~$3.8 million), is unsettling. HPC seems reasonably free of such misbehavior. Maybe that’s my misperception.

It was sad to see what amounts to the end of the line for SPARC with Oracle’s discontinuance of development efforts and related layoffs.

On a positive note: There’s a new book from Thomas Sterling, professor of electrical engineering and director of the Center for Research in Extreme Scale Technologies, Indiana University – High Performance Computing: Modern Systems and Practices, co-written with colleagues Matthew Anderson and Maciej Brodowicz. It’s available now (link to publisher: https://www.elsevier.com/books/high-performance-computing/sterling/978-0-12-420158-3?start_rank=1&sortby=sortByDateDesc&imprintname=Morgan%20Kaufmann).

As always, there was a fair amount of personnel shuffling this year. Diane Bryant left Intel and joined Google. AI pioneer Andrew Ng left his post at Baidu. Intel lured GPU designer Raja Koduri from AMD, where he was SVP and chief architect of the Radeon Technologies Group. Meg Whitman is stepping down as CEO of HPE – the company she helped bring into existence by overseeing the split up of HP two years ago – and will be succeeded by Antonio Neri.

Obviously there is so much more to talk about. The HPC world is a vibrant, fascinating place, and a tremendous force in science and society today.

Happy holidays and a hopeful new year to all. On to 2018!
