Last January Thierry Pellegrino, a long-time Dell/Dell EMC veteran, became vice president of HPC. His tenure comes at a time when the very definition of HPC is blurring, with AI writ large (data analytics, machine learning, deep learning) clamoring for a place at the previously HPC-exclusive table. This blending of HPC with AI has simultaneously stirred excitement and angst in the HPC community, although the hand-wringing by HPC purists has diminished as the power of HPC and AI together becomes apparent.
Dell EMC, of course, is no stranger to HPC, though it has historically focused more on commercial markets. Its win of the $60 million TACC Frontera supercomputer contract got plenty of attention at SC18. The company has also recently stood up an impressive cluster at the Ohio Supercomputer Center (Pitzer) and will soon stand up another cluster at the University of Michigan (Great Lakes). Dell EMC is showcasing leading-edge expertise and collaboration with other leading technology suppliers to support its growing pushes into HPC and AI. A good example is the Great Lakes project, in which Dell EMC worked with Mellanox to deploy the first system to use InfiniBand HDR 200Gb/s.
“I remember when we formed the team, one of the things I said,” recalled Pellegrino, “was, ‘Let’s go and lead. Let’s go through the learning and let’s become good partners.’ Mellanox is a good partner. And you know HDR 200 is a pretty pointed technology, and it took a little bit of back and forth, and Michigan was delighted we were able to get that cluster done.”
In keeping with Dell EMC’s traditional attention to the enterprise, the company is leveraging reference architectures as a means to bring HPC/AI into the broader enterprise. It launched a Ready Solution for AI in August intended to make AI deployment easier. This approach of using reference architectures is fundamental to Dell EMC’s strategy for taking HPC and AI technology to a broader market.
Said Pellegrino, “It’s a reference architecture that tells you, ‘If you want to start from scratch, good luck, but if you want to start from something that has been validated, here it is.’ When we launched in August, the goal was not to have 1,000 customers doing exactly the same thing. The goal is to enable you to talk to a customer and work with them to adapt to their needs.”
It will be interesting to monitor how this effort fares; the use of vertical pre-configured HPC solutions and of reference architectures as a means to encourage greater HPC use (and sales) in the enterprise has generally yielded mixed results.
So now, two years after the merger of Dell and EMC – a period during which Pellegrino says life was interesting – Dell EMC seems to be sharpening the points of a multi-pronged spear to deliver HPC and AI to both sophisticated clients and less skilled organizations. While at SC18, HPCwire and EnterpriseTech had a chance to sit down and talk with Pellegrino. Unfortunately, the poor quality of the recorded interview prevents presenting the full discussion. Nevertheless, Pellegrino’s comments on at least one topic – the wealth of processor options now available – came through clearly. Although Dell EMC is a solid Intel partner, Pellegrino sees growing interest in AMD Epyc and Arm.
Presented here, with apologies to Pellegrino for the briefness of the material, are a few of his thoughts.
HPCwire: I know Dell EMC is a strong Intel partner, and that Frontera will use Intel chips, but we wanted to get your thoughts on the emerging processor diversity. It seems there are more viable processor choices – Intel, AMD Epyc, IBM Power – than there have been in several years.
Pellegrino: Yes, there are more choices. [Yet] even though Epyc (AMD) has come out and has been a good processor, we haven’t seen a landslide from Intel to AMD. We are very excited to see the Intel roadmap continuing to evolve. We have seen the Cascade Lake AP announcement recently and we’ll see what next year brings. The AMD roadmap, with Epyc 2 having been announced recently, is also very interesting. Like everyone, we are analyzing these technologies. I will add Arm into the mix.
HPCwire: Focus on Arm for a moment. Given x86’s strength and Intel’s dominance and even AMD’s growing strength, what’s your take on Arm for broader use in HPC and the datacenter?
Pellegrino: So we have all been watching Arm and saying Arm is going to be relevant… tomorrow. That’s kept on going. Finally Arm cleared the 64-bit hurdle, but to me, and for Dell EMC, it’s still challenging. The whole ecosystem has not been built out. There are reasons why Arm could be more relevant in hyperscale and HPC [for] very targeted applications, if you have the stack for that particular application that is well validated for Arm. At any point in time we are looking at all three of those processor vendors – actually Arm is not a processor vendor but a technology provider – and trying to evaluate which ones are well suited for our customers.
HPCwire: There are now a few major Arm projects being stood up and the silicon has been available for a while. Does that give you more confidence about Arm’s future?
Pellegrino: The TX2 (Marvell/Cavium ThunderX2, available last May) looks good and the ThunderX3 roadmap looks great, but they aren’t the only ones supplying Arm. Fujitsu has an offering. We also see Ampere with an offering coming up. That’s almost one of the challenges; there are so many variations of Arm.
HPCwire: Should we be surprised if Dell EMC releases an Arm system? HPE had an early datacenter server (Moonshot cartridge) based on Arm that didn’t sell well.
Pellegrino: First of all, we do have Arm servers today, but I think the better way for me to answer your question is to say we are not discarding any technology out there. We have an Arm datacenter solution for hyperscale. And it’s true that – just like other OEMs out there – we had a SKU available that was 32-bit and didn’t really sell. But I think we are not one of those OEMs that will go out there and just design it and hope people will go and buy it. We depend upon our customers. I can tell you historically customers have asked questions about Arm but have not been very committal. Those discussions are now intensifying.
HPCwire: Has concern over the so-called heavy lift – porting to and supporting a non-x86 system – been reduced? Is less work needed now?
Pellegrino: It’s still the kind of work it was before in general. When you narrow it down to an HPC environment, I think you can reduce the size of the mountain you have to clear. Now, it’s a matter of needing to understand how much more needs to be provided; the ROI needs to be really valuable. [Remember] the ROI is not affected just by the cost of the processor. You have to factor in the upgrade, the tuning, whatever application you need. But I think there is growing momentum. It’s growing in the right direction.
HPCwire: That sounds promising for Arm.
Pellegrino: I think next year we’ll still see most choices between Intel and AMD. I am just saying Arm is another trend. Look at Qualcomm. We thought they were the guys who were going to lead the pack (of Arm server chip suppliers) – [Qualcomm has since sent mixed signals about serving that market]. That’s a little scary if you think about running a business, and usually it’s the kind of business where you need reliability and you need partners who are going to be present. You know Intel is going to be around for a while.
HPCwire: What about IBM? Would you be considering using IBM Power processors?
Pellegrino: You are the second person today to ask me about Power. I think right now we are very busy and focused on Intel, x86, and Arm. It’s not impossible that Power could become more relevant. We are always looking at technologies. The Power-Nvidia integration was a pretty smart move and we’ve seen some clusters won by Power. But it’s not an avalanche. I think it works great for purposeful applications. For general purpose, I think it’s still looked at as [less attractive] than AMD, Intel, and Arm.
HPCwire: Let’s talk about the rise of heterogeneous architectures. Are you seeing greater demand for accelerated systems?
Pellegrino: We do have more and more customers looking to install heterogeneous systems, and it generally stems from the needs of their own customers. In academia, sometimes it’s a regular standard cluster, but when they want to try to do deeper analysis, they want accelerators, and Nvidia is bringing some GPUs that deliver very obvious advantages there. Quite frankly, we deploy a lot of heterogeneous clusters. In the enterprise [it’s determined] more by the technology best suited to the workloads. Unless the workload is very, very narrow, more often than not what we see is Xeon deployments with GPUs [but how many varies with specific needs].