Cray – and the Cray Brand – to Be Positioned at Tip of HPE’s HPC Spear

By Doug Black and Tiffany Trader

May 22, 2019

More so than with most acquisitions of this kind, HPE’s purchase of Cray for $1.3 billion, announced last week, seems to have elements of that overused, often abused term: transparency. Another surprise: HPE apparently isn’t paying the usual lip service when it says Cray will continue as an entity within the HPE umbrella and that the venerable Cray brand will continue to exist.

We say all this for several reasons – for one, industry observers agree that giving Cray semi-autonomy makes sense; for another, senior executives at the two companies have already provided details on how Cray will operate within HPE’s management and business structure.

Here’s what we know: two companies of major importance to HPC and to the evolution of AI have joined forces. We know that HPE paid a somewhat surprising premium for Cray (3X revenues, 20X earnings and a 27 percent premium over Cray’s average stock price during the previous 30 days), indicating that HPE perceives Cray as an asset with a healthy future life. We also know that the two companies have different – and, the companies said, complementary – HPC and AI product portfolios, with Cray’s market focus on the government/high end and HPE’s on the commercial mid-tier.

Regarding the cultural and organizational fit, Cray CEO Pete Ungaro corroborated HPE CEO Antonio Neri’s statements last week that Cray and the Cray brand will remain entities within HPE. Acknowledging it is “early in the process and many aspects have yet to be determined,” Ungaro stated in a letter to the company’s 1,300 “Crayons” that “They (HPE) see value in the technologies we’ve built (and are building). In fact, they have decided to combine their HPC business into ours and are planning to keep the Cray brand alive in how we market our supercomputing products.”

Further, Ungaro said he will “lead this new organization, to work with their team and make sure we preserve the best of both companies as we bring our two companies together.”

The “combined organization” will exist within HPE’s Hybrid IT business unit, which Ungaro said is the largest segment of HPE and is led by Phil Davis, president of Hybrid IT and HPE’s chief sales officer, who reports to Neri.

Piecing together Neri’s and Ungaro’s statements, it appears that the Cray brand will carry forward HPE’s product and marketing efforts within the high-end HPC – i.e., supercomputing – business.

We know that words and sentiments of this kind are commonly mouthed, with varying degrees of sincerity, by acquirers, and we know why: to avoid disrupting customer and employee relations. But in this case, the fact that Ungaro has made the statements lends them credence. And as mentioned, it makes sense to use the Cray brand as HPE’s marketing spearhead at the zenith of supercomputing-class technology: Cray occupies a unique position as a driver of this segment; it historically helped create and define supercomputing as a technology and a market in its own right; and well more than half of its business is with government agencies and with largely government-funded supercomputing centers and universities, many of which have been Cray customers for decades.

By contrast, HPE, though the market share leader in HPC servers, does most of its business in commercial HPC sectors selling midrange HPC products.

In addition, Cray is executing on contracts to build two exascale-class systems, for Argonne and Oak Ridge national labs, based on Cray’s “Shasta” supercomputing platform and its Slingshot interconnect technology, and it seems unlikely HPE will do anything to disrupt delivery of those systems and realization of their attendant revenues (Neri pointed out last week that exascale computing is a $4.3 billion market opportunity over the next five years).

Breaking down the overall HPC compute market, Neri sizes the supercomputing/exascale sector at between $2.5 billion and $5 billion and the sub-supercomputing HPC sector at $8.5 billion.

Also, while it’s not clear what will come after exascale-class computing, which could serve as the high-end supercomputing architecture of choice for the next decade or more, there is additional opportunity in building systems that attain the next level of supercomputing performance – whatever that may be.

Given the expected continued health of the zenith-class supercomputing market, preserving the iconic Cray brand makes sense, Hyperion Research SVP Steve Conway told us.

“Our studies show that requirements for the enterprise and hyperscale and cloud markets are pushing up into the HPC competency space, especially the leading edge where Cray lives,” said Conway. “HPC is indispensable today at the forefront of AI and machine learning, and the evolving requirements are beyond any vendor’s capabilities today, so Cray is well positioned to help lead the charge.”

Karl Freund, consulting lead for HPC and deep learning at industry analyst group Moor Insights & Strategy, shared a similar view.

“I think it is likely that HPE provides support and additional sales coverage, while not interfering in the core Cray business,” said Freund. “The main-line true-supercomputer class system based on Shasta is already in a class by itself, and I don’t see that changing much other than possibly enjoying the benefits of scale in parts procurement. The US and European governments need this kind of performance for research, and will continue to buy from HPE/Cray, Dell, and Lenovo.”

Last week, Neri said a factor in favor of the acquisition is putting HPE’s global sales and marketing organization to work selling Cray gear to commercial markets, and Freund agrees this strategy has potential.

“HPE brings a very large commercial enterprise go-to-market that’s truly complementary to Cray (because) they don’t have that reach,” Neri said last week, emphasizing the services opportunity. “So you need to look at this first and foremost as an opportunity for HPE as a margin expansion because we leverage their technologies in through the rest of our portfolio and second is upside of revenue by not just participating in supercomputing but over time in the other subsets of the market, which are technologies that will be available over next several quarters, which today we didn’t have the IP.”

On this point, and referring to the Cray “Frontier” exascale system due to be delivered to Oak Ridge in 2021, Freund said, “The real benefit to HPE will be selling ‘mini-Frontier’ systems to smaller government and enterprise institutions. Unlike other acquisitions such as Nvidia/Mellanox and Xilinx/Solarflare, this is a merger of two systems companies, not foundational silicon platforms, so it puts HPE in a unique position in the market; they can and will sell a lot of silicon from all the leading vendors.” He also foresees the convergence of Cray’s CS500 cluster server line and HPE’s Apollo servers, “making both a stronger player in the enterprise for AI and HPC.”

Conway said the nature of the two companies’ product lines and market strategies leads him to expect “Cray to be given a fair amount of autonomy, all the running room needed to realize the potential value of the acquisition by capturing additional exascale and other high-end business. But I would also expect HPE to manage combined company teams to integrate the Cray and HPE product portfolios and roadmaps, and to develop next-generation technologies together. So, there will be a mix of autonomy and integration.”

From a broader perspective, it’s generally agreed that the recent spate of high performance computing consolidations is driven by emerging markets centered on what could be called the “holy quintet” of integrated HPC-big data-AI-5G-IoT solutions. This is spurring the infrastructure vendors to build out a wider range of high performance storage, networking and interconnect capabilities to address the AI-infused compute-everywhere world to come.

It’s this combination of technologies that leads industry watcher Addison Snell, CEO, Intersect360 Research, to speculate on why HPE agreed to pay nearly three times Cray’s annual revenues of $456 million – Snell said the purchase price is about “three times over what I’d expect to see in a stable kind of market. So that leaves two conclusions, one is that everyone at HPE is crazy – I don’t think that’s right. The other is they see some other value here where they can take the Cray tech into some other area.”
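For readers who want to check Snell’s “nearly three times” characterization, a quick back-of-the-envelope calculation (a sketch using only the two figures quoted in this article) bears it out:

```python
# Back-of-the-envelope check of the acquisition multiple, using
# the figures quoted in the article (US dollars).
purchase_price = 1.3e9  # HPE's announced price for Cray
cray_revenue = 456e6    # Cray's annual revenue

multiple = purchase_price / cray_revenue
print(f"Purchase price is {multiple:.2f}x annual revenue")  # ~2.85x
```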

The asset – possibly a hidden asset – that HPE may view as a revenue multiplier, Snell said, is the ClusterStor high performance storage line developed by Seagate and taken over nearly two years ago by Cray, which acquired rights to the brand name and hired Seagate’s ClusterStor engineers.

Noting that HPE cited ClusterStor as a key technology during its briefing last week, Snell said high performance storage “is an area where I have been critical of HPE in the past… It could be what we’re seeing is the potential for HPE to take that ClusterStor line, re-purpose it and sell it a lot more broadly than Cray has been able to do. If HPE could add 50 or 100 percent to its storage footprint in these high performance markets that would quickly start making up the gap in the expected purchase vs the current value of those assets.”

This indirectly relates to the widely noted “lumpiness” of Cray’s high-price/low-volume/long-sales-cycle business model, which forces the company to weather revenue ups and downs between systems deliveries. Neri said HPE’s deeper financial resources (2018 revenue: $31 billion) and its focus on the less expensive, higher-volume commercial HPC/AI markets will smooth out the financial environment in which Cray operates.

Ungaro acknowledged this in his letter to employees, explaining that the company wants to expand its customer base and to take advantage of the opportunities that “the massive growth in data and convergence of modeling, simulation, analytics and AI provide us… and the chance to win a few more monster exascale systems around the world.”

However, he said, “With our current operating model, it’s challenging to make the investments we need to fully take advantage of these opportunities. At the same time, our size often inhibits us from getting the same prices on components that our competitors do. Additionally, we face the longer-term threat of the massive cloud vendors. Making the necessary investments while also delivering profitability over time is a challenge and a major risk for our business.”

The acquisition is expected to close within six to nine months; the two companies will remain separate and will work in the market independently until that time.
