No, this isn’t about the song from Charlotte’s Web or the Scandinavian predilection for open sandwiches; it’s about the apparent newfound choice in the HPC CPU market.
For the first time since AMD’s ill-fated launch of Bulldozer, the answer to the question ‘Which CPU will be in my next HPC system?’ doesn’t have to be ‘Whichever variety of Intel Xeon E5 they are selling when we procure’.
In fact, it’s not just the x86 market where there is now a genuine choice. Soon we will have at least two credible ARM v8 ISA CPUs (from Cavium and Qualcomm), and IBM have gone all in on the Power architecture (having at one point in the last ten years had four competing HPC CPU lines: x86, Blue Gene, Power and Cell).
Indeed, it may even be Intel that is left wondering which horse to back in the HPC CPU race, with its two Xeon lines (Xeon and Xeon Phi) looking insufficiently differentiated going forward. A symptom of this dilemma is the recent restructuring of the Xeon line, along with the associated pricing and feature segmentation.
I’m also quite deliberately avoiding the potentially disruptive appearance of a number of radically different computational solutions being honed for machine learning, which will inevitably have some bearing on HPC in the future.
Have we seen peak Intel?
Intel’s 90+ percent market share in the datacentre has for years worried many observers. While their products have undoubtedly been very good, when you have an effective monopoly, the evolutionary pressure that drives innovation and price competitiveness understandably wanes.
“Success breeds complacency. Complacency breeds failure. Only the paranoid survive.” – Andy Grove
The re-emergence of credible competition can only be a good thing for the wider market, but in HPC things are less clear cut. Intel still holds a strong hand in the game of poker that is HPC procurement, namely AVX-512, but with many of the larger Top500 systems tending to be heterogeneous in nature, will this be enough to fend off the challenge from the following pack in other parts of the HPC ecosystem?
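To make that card concrete, here is a minimal, purely illustrative sketch (not from the article, and the function name is my own) of why AVX-512 matters: a daxpy-style kernel written with AVX-512F intrinsics, in which every fused multiply-add touches eight doubles at once, twice the vector width of AVX2. It assumes an AVX-512-capable part and a suitable compiler flag such as gcc’s -mavx512f.

#include <immintrin.h>
#include <stddef.h>

/* Illustrative only: daxpy (y = a*x + y) using AVX-512F intrinsics.
   Each fused multiply-add processes eight doubles per instruction.
   Assumes n is a multiple of 8 and an AVX-512 capable CPU;
   compile with e.g. gcc -O2 -mavx512f. */
void daxpy_avx512(size_t n, double a, const double *x, double *y)
{
    const __m512d va = _mm512_set1_pd(a);        /* broadcast the scalar a */
    for (size_t i = 0; i < n; i += 8) {
        __m512d vx = _mm512_loadu_pd(x + i);     /* load 8 doubles of x */
        __m512d vy = _mm512_loadu_pd(y + i);     /* load 8 doubles of y */
        vy = _mm512_fmadd_pd(va, vx, vy);        /* vy = a*vx + vy in one op */
        _mm512_storeu_pd(y + i, vy);             /* write the result back */
    }
}

The equivalent NEON code on today’s ARM v8 parts processes two doubles per instruction, which is exactly the kind of per-core arithmetic gap the challengers have to make up elsewhere in the system.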
IBM and Nvidia are clearly hoping to make significant inroads at the top table of HPC with their CORAL generation systems, and Qualcomm and Cavium will also be hoping to chip away at Intel’s monopoly (though they are probably not aiming directly at HPC), but these non-x86 alternatives face significant problems when it comes to demonstrating their capabilities in the HPC space.
AMD have a great opportunity to make gains in the HPC space with their EPYC line (the only x86 competitor), and early signs are encouraging that they will take the fight to Intel, and not just on price-performance grounds.
Inertia in HPC is a funny thing
We mainly think of inertia as a property of physical objects, but in the HPC industry there is a similar phenomenon relating to application code bases (and languages), instruction sets (and their optimised software library ecosystems) and how hard it is to justify doing something different. In the case of HPC, this is really an argument about the barrier to entry for the new HPC CPU vendors, and what they have to demonstrate in order to displace the incumbent (i.e. Intel).
Without trying to evade the question, we all hope that the non-Intel vendors can find the right combination of price and performance to chip away at Intel’s current dominance in the datacentre. Not because we want to see Intel fail, but because we want the challengers to succeed. Healthy competition is definitely good for users, though less obviously so for Intel’s shareholders.
If all you have is a hammer
“Ah-ha!” I hear you cry, “We already embrace different ISAs and heterogeneity in the Top500.” And indeed we do. In fact, the latest Green500 list is testament to how effective this approach can be. We also know that LINPACK is historically a poor predictor of most real HPC application performance, but we still use it as a flagship benchmark, predominantly because it does a good job of stress-testing the computational elements of a system architecture. With the march towards exascale now looking more like the retreat from Moscow, there is an increasing need to improve system efficiency for applications that don’t exhibit LINPACK-esque scaling characteristics. Machine learning looks to be the new yardstick, so it will be interesting to watch the rapid evolution of new solutions and benchmarks.
Moore’s Law in ICU
We should also acknowledge the increasing challenges facing silicon fabrication and process technology. Keeping the Moore’s Law show on the road is hard. This isn’t news to folk in HPC, but it is one of the reasons why exascale in under 20 MW (anything else looks prohibitively expensive) looks to be an exceedingly challenging goal for the next five years.
Intel are still at the vanguard when it comes to eking out the increasingly esoteric improvements needed, but when you have to re-state what aspects of process naming conventions should matter, you are already rapidly approaching the point of diminishing returns.
Moore’s Law is an engine that has historically driven significant growth across the board and enabled the in silico renaissance that most HPC users are engaged in, but it is faltering at just the moment that exascale computing systems need a significant uplift in system efficiency. There still need to be huge improvements in parallelism, memory and storage efficiency, and data transmission, and that’s even before you start to consider fault recovery and software complexity for such huge systems.
We’ve been fairly good at scrambling over the various ‘walls’ we’ve encountered in the last couple of decades, but does anyone else have a feeling that we are on the cusp of a period of innovation in HPC that we haven’t seen for some time?
Benchmark, benchmark, benchmark
For the first time in at least five years, comparative benchmarking, conducted as part of your pre-tender and tender process, is looking to be an absolutely essential step in delivering best value. Rather than just being viewed as something that provides a little more confidence that the vendors have tuned the MPI implementation and fabric topology, and that you know which compiler flags to flip, it will shine a light into some of the dark, musty corners that more complacent software developers and vendors have chosen to ignore. If for no other reason, it will ensure that the supported pricing you get from your suppliers is as keen as it should be.
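As a purely illustrative starting point (not from the article, and no substitute for benchmarking your own application codes), here is the kind of portable microbenchmark that compiles unchanged on x86, ARM v8 and Power: a single-threaded STREAM-style triad in plain C, useful as a first sanity check of sustained memory bandwidth on candidate systems.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Illustrative STREAM-style triad (a = b + s*c): a single-threaded
   memory-bandwidth probe. Compile with e.g. gcc -O2 triad.c on any
   candidate platform. Array sizes are chosen to overflow any
   plausible last-level cache. */
#define N (1L << 25)                      /* 32M doubles, 256 MB per array */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }  /* touch pages */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];                 /* the triad kernel */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* two reads plus one write of N doubles each */
    printf("triad: %.2f GB/s\n", 3.0 * N * sizeof(double) / secs / 1e9);

    free(a); free(b); free(c);
    return 0;
}

Running the same handful of your own kernels across loaner EPYC, Xeon, ARM and Power systems is usually enough to separate marketing slideware from deliverable performance.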
Dairsie Latimer is a Managing Consultant for Red Oak Consulting.