What’s Hot and What’s Not at ISC 2018?

By Dairsie Latimer

June 22, 2018

Editor’s note: Tick tock. It’s the final countdown to one of the most well-attended HPC events of the year. Red Oak’s Dairsie Latimer shares his perspective on the standout themes in play.

As the calendar rolls around to late June, the ISC conference, held in Frankfurt (June 24th-28th), heaves into view. With some of the pre-show announcements already starting to roll out, what do we think some of the main talking points will be next week?

There’s already been some press around the Summit machine at ORNL, with the traditional peak DP FLOPS (and the associated Linpack run for the Top500) taking a slight back seat to the new maths of exaops (counted in single and reduced precision FLOPS). Apart from some dubious interpretation of the numbers from the perspective of us HPC types, the decision to quote figures in exaops (as opposed to DP FLOPS) does represent an apparent symbolic shift in emphasis for scientific computing.
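To put the two units side by side, here is a rough back-of-envelope sketch in Python. The node count and per-GPU peak figures below are the commonly quoted public numbers for a Summit-like system and NVIDIA's V100, used here purely as assumptions to illustrate why an "exaops" headline and a DP FLOPS headline describe very different quantities.

    # Back-of-envelope contrast between double-precision FLOPS and
    # reduced-precision "exaops". Figures are assumptions based on
    # commonly quoted V100 peaks and an approximate Summit configuration.
    nodes = 4608             # approximate node count
    gpus_per_node = 6        # V100 GPUs per node
    fp64_per_gpu = 7.8e12    # ~7.8 TFLOPS double precision per V100
    tensor_per_gpu = 125e12  # ~125 TFLOPS FP16 tensor-core throughput per V100

    gpus = nodes * gpus_per_node
    print(f"Peak FP64   : {gpus * fp64_per_gpu / 1e15:.0f} PFLOPS")    # ~216 PFLOPS
    print(f"Peak tensor : {gpus * tensor_per_gpu / 1e18:.2f} exaops")  # ~3.46 exaops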

The apparent press confusion, over how fast and by what margin Summit would lead the Chinese competition when the Top500 list is announced next week, was probably fuelled as much by jingoism as any practical confusion over the metrics for comparison.

There’s been plenty of commentary around the new accounting methods (in terms of claiming bragging rights), but it does raise a serious point – workloads are becoming more diverse, and mixed precision is a good thing, especially for ML and DL. The confluence of traditional HPC simulation, big data analytics and ML (with DL thrown in for good measure) means the relevance of LINPACK as a benchmark is more tenuous than ever. We’re back to the mantra, “Benchmark. Benchmark. Benchmark again.” – and not just with purely synthetic workloads. Do it with your own applications and real data if you can, and turn on all your IO, job profiling and stats gathering, because they can have a surprising effect on actual performance.
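As a concrete (and deliberately minimal) illustration of that advice, the sketch below times repeated full runs of your own production command rather than a synthetic kernel; the command shown is hypothetical and should be replaced with whatever your users actually run, with I/O, job profiling and stats gathering left switched on.

    # Minimal benchmarking harness: time several complete runs of a real
    # workload and look at the spread, not a single headline number.
    import statistics
    import subprocess
    import time

    CMD = ["mpirun", "-np", "128", "./my_app", "--input", "real_case.dat"]  # hypothetical command

    timings = []
    for _ in range(5):
        start = time.perf_counter()
        subprocess.run(CMD, check=True)
        timings.append(time.perf_counter() - start)

    print(f"median {statistics.median(timings):.1f}s  "
          f"min {min(timings):.1f}s  max {max(timings):.1f}s")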

Prof. Bryan Lawrence[1] gave an interesting recent presentation to the EuroHPC Requirements Workshop in which he discusses, amongst other things, performance metrics (measures of speed) in the context of climate and weather science (though the concept extends easily to other domains). His very reasonable contention is that what users really care about is Simulated Years Per (real) Day (SYPD), or an equivalent, and that once you take into account the costs required to achieve different levels of performance at given resolutions and with differing levels of complexity (numbers of physical parameters, etc.), you have populated the matrix which informs the scope of the procurement and the likely technological approaches.
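A quick worked sketch of the arithmetic behind that metric; all the numbers below are hypothetical and exist only to show how SYPD and a derived cost-per-simulated-year figure populate the sort of matrix Lawrence describes.

    # Simulated Years Per (real) Day and the derived cost per simulated year.
    simulated_years = 5.0       # length of the simulated period
    wall_clock_hours = 36.0     # wall-clock time the run actually took
    nodes = 512                 # nodes occupied by the job
    cost_per_node_hour = 0.10   # notional price per node-hour

    sypd = simulated_years / (wall_clock_hours / 24.0)
    node_hours_per_sim_year = nodes * wall_clock_hours / simulated_years
    cost_per_sim_year = node_hours_per_sim_year * cost_per_node_hour

    print(f"SYPD                    : {sypd:.2f}")
    print(f"Node-hours per sim year : {node_hours_per_sim_year:.0f}")
    print(f"Cost per simulated year : {cost_per_sim_year:.2f}")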

Anyway, I digress. We also have another interesting pre-announcement in the system architecture of Sandia’s Astra (based on Cavium’s ThunderX2s and HPE Apollo 70s). It will be interesting to see where this system lands later in the year (somewhere just inside the November Top100, I suspect), but it will probably be one of the first major multi-petaflops deployments of an ARM-based HPC system (but props to the GW4 collaborators in the UK for leading the way).

We have plenty of noise being made by non-Intel CPU vendors this June, which leads us to the inevitable twice-yearly juggling of Intel’s Xeon product roadmap due to the issues surrounding their 10 nm process transition. Do they need 10 nm to make good CPUs? Manifestly not. But the multi-year delays 10 nm has experienced have probably been directly implicated in the mercy killing of Xeon Phi and the rejigging of the Aurora system architecture, as well as significant disruption to other product roadmaps, just as their potential competitors in the datacentre are starting to hit their stride.

Apparently Intel’s (now former) CEO Brian Krzanich told analysts that it was Intel’s job not to let AMD capture 15-20 percent market share (see HPCwire coverage here). Even in a growth market, and assuming this is units shipped rather than revenue, that would still represent a pretty big dent in x86 revenues for Intel, and that’s not factoring in the single-digit slice that ARM and IBM are likely to capture. Perhaps this is a deliberately pessimistic assessment by Intel to help soften the market reaction in the next FY?

AMD, while not yet making significant inroads into the datacentre, have definitely started to chip away with the first-generation Epyc CPUs, with projections in the 5 percent range by Q4 2018, and the latest 7 nm core will add some additional impetus given Intel’s problems delivering their equivalent 10 nm products.

Outside the x86 space, Intel have also been talking about Optane DIMMs and their decision to go back into the discrete GPU market again. The first is just late but full of potential; the second seems like a distraction they can ill afford if it really is also targeted at the datacentre. Intel have quite a few irons in the fire and, at least on paper, four competing (Intel would probably prefer the term complementary) ML/DL strategies. The Nervana acquisition has yet to bear fruit in the ML/DL space (now pushed back into 2019), the Xeon+FPGA strategy is likely to be too costly for deployment in general HPC procurements, and Knights Mill was presumably rounded up with the other Knights and quietly sent to the glue factory, so for the time being stock Xeon is a perfectly effective platform for inference workloads (actually using the trained models). That’s assuming you can work out how to deploy ML and DL using the plethora of competing frameworks (TensorFlow, Caffe2, CNTK, etc.) in a production environment. This brings us neatly onto three of the most interesting topics at ISC from our perspective:

  • Beyond Moore’s Law
  • The Rise of Containerized HPC
  • Artificial Intelligence on HPC Platforms

The first will hopefully be an interesting summary of the changes that we, as system and software engineers as well as users, can expect to see over the next three to five years, and how they will inevitably impact the HPC application software stack.

The last two are, in my mind, currently intertwined. ML/DL is currently the Wild West of scientific and computer science research, with vast unexplored tracts of new territory and exciting things to see and discover. It’s also a lawless place, with various competing parties pushing ahead to further their own interests and agendas. The fact that these frameworks are churning so fast means that they are practically impossible to deploy in any conventional HPC production environment.

This, of course, is where containerisation comes in, and everyone I know will be tracking it with some interest. Docker has the hearts and minds of the ML/DL researchers because they typically work on single-tenanted, or at least single-use, GPU-heavy boxes (think DGX systems from NVIDIA) where being root isn’t necessarily anathema. However, no self-respecting HPC centre will deploy Docker on its general-use HPC systems, so that leaves Singularity as the most oft-cited container platform of interest for production HPC systems. I’ll be really interested to see how software as infrastructure, containerisation, automated build and test, and deployment and provisioning systems adapt over the next six to twelve months. I’m hoping that the Machine Learning day will cover or initiate serious discussions on at least some of these issues.
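For the curious, here is a minimal sketch of what the unprivileged route looks like in practice. It assumes Singularity is installed on the compute nodes and that a TensorFlow image has already been pulled from Docker Hub; the image name, paths and arguments below are hypothetical.

    # Run a containerised training job as an ordinary user -- no root
    # daemon involved. `--nv` binds the host's NVIDIA driver and libraries
    # into the container so the framework inside can see the GPUs.
    import subprocess

    IMAGE = "tensorflow-1.8.0-gpu.simg"      # e.g. from `singularity pull docker://tensorflow/tensorflow:1.8.0-gpu`
    SCRIPT = "/lustre/projects/ml/train.py"  # hypothetical path on the shared filesystem

    cmd = ["singularity", "exec", "--nv", IMAGE, "python", SCRIPT, "--epochs", "10"]
    subprocess.run(cmd, check=True)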

About the Author

Dairsie Latimer, Technical Advisor at Red Oak Consulting, has a somewhat eclectic background, having worked in a variety of roles on the supplier and client side across the commercial and public sectors as a consultant and software engineer. Following an early career in computer graphics, micro-architecture design and full-stack software development, he has over twelve years’ specialist experience in the HPC sector, ranging from developing low-level libraries and software for novel computing architectures to porting complex HPC applications to a range of accelerators. Dairsie joined Red Oak Consulting (@redoakHPC) in 2010, bringing his wealth of experience to both the business and customers.

[1] NCAS, University of Reading
