Dell may have inadvertently revealed that high-performance computing is on the roadmap for its Apex multicloud services.
“If I was looking downstream I’d look for Apex versions of HPC,” said Jeff Clarke, vice chairman and co-chief operating officer at Dell, in a press conference on Tuesday that was livestreamed.
Project Apex is Dell’s multicloud strategy, a combination of hardware and software products that aggregates clouds, storage, services and hardware across far-flung locations so they behave as a single system. Dell on Tuesday also announced Project Frontier, which makes it easier to plug edge devices such as robots, video surveillance cameras and other sensor equipment into the company’s multicloud environments.
Clarke’s comment surprised the moderator, JJ Davis, Dell’s senior vice president of corporate affairs, who tried to reframe it as a guess. “Pre-announced and all kinds of stuff,” Davis said.
Clarke took Davis’ comment in stride and initially softened his remark, but later doubled down on bringing HPC to Apex as part of a wider offering that will extend to all kinds of devices.
“There’ll be PC versions of this. Notebooks and desktops, whether it’s vertical, whether it’s a VDI solution or an HPC solution, whether it’s the horizontal capabilities, whether we talked about extensions into public cloud, this is what we’re building,” Clarke said.
“So while we’re having a little fun with it, it’s pre-disclosing – this is the direction of where we’re going with the capabilities of our company in this multicloud world,” Clarke said.
A few sentences later, Davis cut Clarke off mid-sentence, perhaps to limit the damage from any further pre-disclosures.
But Clarke, and later CEO Michael Dell, repeated that high-performance computing was an important market for Dell from both a technological and a financial perspective.
“They tend to buy real capable servers with lots of memory, with lots of GPU capability, we like that business. We’ll continue to participate,” Clarke said.
Michael Dell said that the HPC sector was driving the golden age of processor architecture, with systems going beyond CPUs and tasks being offloaded to alternative chips like GPUs.
“Think of it as well beyond the CPU with DPUs and QPUs – all sorts of offload engines that are addressing this explosion in data and the incredible advancements in computer science to deal with that data,” Dell said.
Addison Snell, CEO of supercomputing research firm Intersect360 Research, noted that Dell has significant supercomputing installations to its name. On the current Top500 list, Dell built the largest supercomputer at a commercial customer site: HPC5, the 12th-fastest system in the world, installed at Eni in Italy. It also built the largest academic supercomputer, Frontera, the 16th-fastest system, deployed at the Texas Advanced Computing Center (TACC) in Austin.
“Dell often gets underappreciated for the magnitude of its contributions to the HPC market, probably because it hasn’t had the cachet of the leading-edge Department of Energy sales that HPE has thanks to its acquisition of Cray,” Snell said.
HPC has always been at the tip of the spear for computer architecture, Dell said. He gave the example of the Stampede supercomputer at the University of Texas at Austin, which went up in 2013 and was once the sixth-fastest supercomputer in the world. The system was developed jointly by Intel and Dell and retired in 2017.
“[HPC has] been incredibly important for us and … UT Stampede clusters are a great example of that,” Dell said.
After Dell finished, Clarke chimed in again on the importance of HPC to Dell. There has been an explosion of data, and HPC-style compute will be needed to handle it, Clarke said.
“What you find is an architectural shift that’s happening. You’re going to see the compute resources and storage resources follow where the data is created,” Clarke said.
Compute and storage resources will be distributed to follow the data, which will scale up computing needs. A lot of data is already being created at the edge, feeding the need for high-performance computing, especially for machine learning models.
“You’re going to be doing real-time processing of data on the edge to drive better outcomes there and these two worlds have to connect,” Clarke said.
Dell has the application engineering and services for this space, but it is missing a homegrown interconnect, Snell said. HPE’s big DOE wins have been based on the Cray Slingshot (now HPE Slingshot) interconnect, Fujitsu has its Tofu interconnect in Fugaku, and Atos has BXI.
“When Dell builds a massive system, it uses standard server building blocks and InfiniBand. Maybe that makes it feel less special somehow, like anybody could’ve done it – but Dell is getting picked,” Snell said.
A new supercomputer called Horizon is slated to be housed at TACC around 2026. It’s not clear exactly what role Dell will play, but the company has been cited as a project partner. The system is part of the National Science Foundation’s plan to create a Leadership-Class Computing Facility (LCCF) at the center.