Intel used the latest MLPerf Inference (version 3.1) results as a platform to reinforce its developing “AI Everywhere” vision, which rests upon 4th gen Xeon CPUs and Gaudi2 (Habana) accelerators. Both fared well on the latest MLPerf exercise, which featured a new large language model (GPT-J) as part of the benchmark suite. Neither bested Nvidia H100 submissions, but that wasn’t the goal according to Intel.
The broad Intel mantra is that the H100’s scarcity and high cost make Gaudi2 an attractive accelerator alternative, which, when combined with the latest Intel CPUs – 4th gen Xeon and Xeon CPU Max Series with in-package HBM – covers the full gamut of AI (training and inference) requirements. This portfolio underpins the Intel AI Everywhere strategy.

Jordan Plawner, senior director of Intel AI product management and strategy, told HPCwire, “We have people who want to do AI in every kind of way and on every chip that we have. The examples set by MidjourneyAI and OpenAI have lit a fire under dozens of companies, if not hundreds, to develop their own [capabilities]. There’s this huge gold rush for accelerated systems and clusters in the near term.”
“Consistent with the last MLPerf training results, Gaudi2 continues to outperform the [Nvidia] A100. I like to remind people that a year ago, the A100 was state of the art and everyone had to have it. We’re very proud that Gaudi2 is beating the A100, and we’re hearing from the market that the perception is that we’re the only viable alternative to Nvidia. The market needs a second source. [Intel] is busy meeting with customers, and Pat [Gelsinger, Intel CEO] has made it an Intel mission to go get Gaudi2 design wins. We can’t wait to share those publicly,” he said.
It is, without doubt, distinctly odd to hear Intel tout its Gaudi2 AI accelerator and 4th gen Xeon CPU AI capabilities as desirable because Nvidia H100 GPUs are scarce, costly, and functionally limited to top-of-the-AI-pyramid applications, but that does capture the basic pitch. No doubt Nvidia would tell a different story, and it clearly remains the dominant GPU provider. Citing Gartner numbers, which peg the AI semiconductor market at $53 billion for 2023, Plawner describes the AI gold rush as a barbell with two bulging ends, one for dedicated AI needs and one for integrated, intermittent needs.
While not as fast as the Nvidia H100, Gaudi2 is sufficiently performant and increasingly available to satisfy most applications, said Plawner. Bear in mind that Gaudi2 is only available as part of a system and not as a standalone part.
“We’re ramping Gaudi2 in terms of supply. We have product coming in every week and have been very aggressive at placing those POs ahead of the demand [to prevent] long lead times. I think that’s where the H100 is stuck and why customers are eager to use Gaudi2 even though it doesn’t beat the H100 hands-down. I think it’s the lack of supply of H100, and I think it’s the desire to see Intel successful. We’ve had the feedback that we have met the threshold of what the customers say is good enough to go work with you,” said Plawner.
“We don’t quote Gaudi2 pricing. You’d have to go to Supermicro, for example, and see what their price is for a system. But generally, what we’ve seen on system pricing is about parity between A100 systems and Gaudi2 systems,” said Plawner, citing the use of InfiniBand cards in DGX or HGX systems versus an “overall lower cost architecture because Ethernet and the RDMA stack is native inside the chip.”
In a prepared statement, Intel said, “We delivered the GPT-J inference results with the FP8 data type enabled on Gaudi2, delivering excellent accuracy and speed. Over the coming weeks, we plan to expand our FP8 support for multiple inference models, and add support for training shortly after that.” FP8 support was not yet part of the publicly available stack in June’s MLPerf training benchmark.
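Intel has not published the Gaudi2 FP8 code path (it lives in Habana’s SynapseAI stack), but the numeric trade-off FP8 entails is easy to see in isolation. The sketch below is purely illustrative and Gaudi2-agnostic: it assumes stock PyTorch 2.1+, which exposes the same E4M3 FP8 format as a tensor dtype.

```python
# Illustrative only: round-trip fp32 values through FP8 E4M3 (4 exponent
# bits, 3 mantissa bits) to see the rounding error a model must tolerate
# when weights/activations are stored in FP8. Requires PyTorch >= 2.1;
# nothing here is specific to Gaudi2 or SynapseAI.
import torch

x = torch.randn(4) * 100.0           # a few fp32 values
x_fp8 = x.to(torch.float8_e4m3fn)    # cast down to FP8 E4M3
x_back = x_fp8.to(torch.float32)     # cast back up to inspect the loss

print("fp32:          ", x)
print("fp8 round-trip:", x_back)
print("abs error:     ", (x - x_back).abs())
```

The appeal of FP8 for inference is that this precision loss is usually tolerable while halving memory traffic relative to FP16/BF16, which is where the speedups come from.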

On the CPU side, Intel remains the only vendor to enter CPUs in MLPerf. Besides Intel’s own submissions, Dell, HPE, and Quanta Cloud Technology all had entrants using Intel CPUs for inference processing (a minimal INT8 inference sketch follows the list):
- Dell PowerEdge Server R760 (1x Intel Xeon Platinum 8480+)
- HPE, 1-node-2S-SPR-PyTorch-INT8 (Intel Xeon Platinum 8480+)
- Quanta Cloud Technology, 1-node-2S-SPR-PyTorch-INT8 (Intel Xeon Platinum 8480+ 56-Core Processor)
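The “PyTorch-INT8” label on these entries refers to 8-bit integer inference. As a rough illustration of the idea – not the optimized stack Intel and its partners actually submitted – here is a minimal sketch using stock PyTorch dynamic quantization on a placeholder model:

```python
# Minimal sketch of INT8 inference on a CPU with stock PyTorch dynamic
# quantization. The model is a toy stand-in; MLPerf submissions run much
# larger networks on Intel's optimized software stack, not shown here.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).eval()

# Weights of Linear layers are stored as INT8; activations are quantized
# on the fly at inference time.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.inference_mode():
    out = qmodel(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 10])
```

The real submissions additionally exploit 4th gen Xeon features such as the AMX and VNNI instruction sets through Intel-optimized kernels.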
Like others, Intel foresees a huge market for smaller-scale AI applications. “This goes for inferencing, as well as fine-tuning – that stage between training and inferencing, where we’re just taking a model and only changing the last few layers, really distilling down that technology and compressing it. You can run that on Xeon or Xeon Max Series, which is the Xeon that has HBM in package. This is for building and deploying at enterprise scale, [but] generally think of smaller models doing specific, targeted applications,” he said.
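The pattern Plawner describes – freezing a pretrained model and retraining only its last few layers – is cheap enough that CPUs can be practical for it. A minimal, hypothetical PyTorch sketch (placeholder model and data, not Intel’s workload):

```python
# Fine-tuning sketch: freeze a pretrained backbone and train only a new
# task head. Backbone, head, and data are placeholders for illustration.
import torch
import torch.nn as nn

backbone = nn.Sequential(            # stands in for a pretrained model
    nn.Linear(768, 768), nn.ReLU(),
    nn.Linear(768, 768), nn.ReLU(),
)
head = nn.Linear(768, 5)             # new task-specific last layer

for p in backbone.parameters():      # freeze everything but the head
    p.requires_grad = False

model = nn.Sequential(backbone, head)
opt = torch.optim.AdamW(head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 768)              # toy batch of features
y = torch.randint(0, 5, (8,))        # toy labels

for _ in range(3):                   # a few CPU-friendly training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

Because gradients flow only through the head, compute and memory costs are a small fraction of full training, which is what makes the Xeon pitch plausible.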
About the Xeon Max Series, he said, “[It’s] the socket that has HBM in the package, so it’s kind of like a hybrid; it’s not quite an accelerator, because it’s not a piece of silicon dedicated to a single function, and it’s not quite a standard Xeon because it has the memory in package. So it’s for memory-bound workloads that need to cache the data, algorithms, or models and keep them really close to the compute. This is the first MLPerf submission using the Max Series, and we now have a quality AI stack.”
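“Memory-bound” here means performance is limited by how fast data moves between memory and the cores rather than by arithmetic. A crude, hypothetical probe of that limit (the array size and copy kernel are arbitrary choices, and results vary with caches and page allocation):

```python
# Rough effective-bandwidth probe: a large array copy does almost no
# arithmetic, so its speed reflects memory bandwidth, the resource that
# in-package HBM is meant to enlarge. Sizes are arbitrary placeholders.
import time
import numpy as np

n = 100_000_000                      # two float64 arrays, ~0.8 GB each
a = np.zeros(n)
b = np.random.rand(n)

t0 = time.perf_counter()
np.copyto(a, b)                      # pure memory traffic: read b, write a
dt = time.perf_counter() - t0
print(f"~{2 * 8 * n / dt / 1e9:.1f} GB/s effective copy bandwidth")
```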
Link to Intel blog: https://www.intel.com/content/www/us/en/newsroom/news/intel-shows-strong-ai-inference-performance.html#gs.5vn1sq