On November 1 – not quite three weeks ago – Hewlett Packard Enterprise (HPE) emerged from the Big Split. That’s old news given the yearlong lead-up. Throughout the “separation” process, opinions varied wildly (and still do) over HPE’s prospects. Clearly it’s early days, but when IDC rolled out HPC market numbers on Tuesday (see below), HP remained firmly ahead of its closest competitors with a 36.1 percent share of the HPC server market. Dell was number two with 16.9 percent.
HPE reports much of the heavy lifting is done – the successful introduction of a new HPC product line (Apollo), the formation of a strategic HPC alliance with Intel, and the reorganization of HPC and big data into a single global business unit – with most of the changes accomplished throughout the year rather than in a last-minute dash. It hasn’t been painless. In September, HP (pre-split) announced plans to cut on the order of 25,000 staff, but the hardest part may be over.
At SC15, instead of a barrage of new product announcements, HPE has been reinforcing the idea that its steady preparation is paying off. “We actually ‘went live,’ if you will, on August 1 when all of our internal systems cut over in preparation for November 1,” said Bill Mannel, vice president and general manager of the new HPC and big data global business unit. “I think we had a little customer interruption from a shipping perspective in August because we had to shut down a factory in order to cut over systems, but that’s it. By November everything was done.”
Time, of course, will tell how successful the HPE gambit proves. For the moment, HPE seems to have given itself a good shot at success. Like other major HPC systems makers, HPE has its eyes on the enterprise, and its evolving product line spans from supercomputing down to mid-size and small HPC servers.
The Apollo line, launched roughly 18 months ago, is the HPC mainstay. The top-of-the-line Apollo 8000 (liquid cooled) and 6000 (air cooled) systems have been well received, with several significant wins including the Peregrine supercomputer, based on the 8000 and jointly developed with DOE’s National Renewable Energy Laboratory (NREL). At midyear, the 2000 and 4000 were added to the line.
“The 2000 is an HPC play that allows enterprises and smaller customers to comfortably move to the type of purpose-built HPC infrastructure that a lot of the bigger players have. Its standard footprint fits in a 19″ rack, it’s air cooled, has drives in the front, and cables in the rear,” said Mannel. “The 4000 is a big data machine. The reference architecture is built around Hadoop and we have object storage from both Scality and Cleversafe.”
Recently, the Moonshot line, which was introduced in 2013 and is generally aimed more at conventional datacenter and cloud applications, was also shifted under Mannel’s responsibility. “Moonshot is aligned alongside the Apollo. I now have a full product line to bring to market,” said Mannel.
In July HP announced the deeper alliance with Intel, which among other things enables HPE, Intel, and HPE customers to collaborate directly, gaining early access to Intel technology and creating purpose-built platforms. The alliance has two key components:
- Closer collaboration with Intel overall to incorporate the Intel Scalable System Framework into the Apollo line, working around specific workloads and datasets and optimizing around those to create purpose-built systems for industry verticals and other customer workloads.
- Expanded Centers of Excellence (CoEs) intended to make it easier for HPE customers to work with ISVs and HPE/Intel engineers to modernize code and optimize infrastructure for HPC-related workloads. There’s one in Grenoble, France, and one now being built out in Houston. The dedicated infrastructure and expertise available at the CoEs, as well as a broad portfolio of services, can be used on-site or accessed remotely.
Broadly, the idea is to provide tuned and balanced systems that focus on unique customer workloads and application performance. The systems will leverage next-generation Intel Xeon processors, the Intel Xeon Phi product family, Intel Omni-Path interconnect technology, and the Intel Enterprise Edition of Lustre. Leveraging the alliance, HPE has, for example, had an Apollo 2000 with Omni-Path infrastructure running specific customer codes since October.
“We now have a technology roadmap and can have a conversation with a customer (NDA required) on what our roadmap is together,” said Mannel, adding that HPE has several ongoing collaborations in financial services, oil and gas, and life sciences.
Now that it is on its own, HPE is working to quickly reassure the market with a clear strategy message and notable reference customers and use cases. “One customer is the Pittsburgh Supercomputing Center, where we have partnered across the HPE server portfolio with Intel using Omni-Path Architecture and have created a unique HPC and big data architecture for PSC,” said Mannel.
Another example is work with the Texas Advanced Computing Center (TACC) at the University of Texas. “We have an Apollo 8000 there which is being used by NTT working on direct voltage development. Currently the platform is running 380V DC within the rack, and the ultimate goal is to be able to feed the 380V DC directly as opposed to using a conversion process, which is what we do now,” said Mannel. The system not only provides computing capacity for TACC and its users but also serves as a test bed for power technology.
Like Intel, HPE is a “founding” member of the OpenHPC initiative being developed under the Linux Foundation. The notion of a “standard” HPC software stack is attractive for many reasons, not least because it would make adoption of HPC easier for the broader enterprise community. Mannel agrees, but adds that even though HPE is a founding member, the work is still very early.
It does seem the link between Intel and HPE is growing even stronger. Take, for example, the National Strategic Computing Initiative (NSCI). “We and Intel recognized its importance and decided to add government as a focus and are looking at collaboration in the area as well,” said Mannel.
NSCI, of course, is attracting lots of attention from the entire HPC community. A draft implementation plan has been crafted but hasn’t been shown publicly. At an NSCI overview during SC15 yesterday, William T. Polk of the Office of Science and Technology Policy said he didn’t think the plan would be presented until early next year, perhaps around February. Details around funding, procurements and process remain unsettled. The draft implementation plan is said to be quite long and will no doubt undergo revision.
Nevertheless, Mannel said “[NSCI representatives] were actually in Houston looking, which is where I am based, and we had them for a full day going through HPE engineering, manufacturing, and our test laboratory.”
Clearly, there are many moving pieces to the HPE story – but that’s really not any different from most system builders. Change is in the air for everyone with the collision of big data and HPC, the slowing of Moore’s law, increased heterogeneity, the race to exascale, the future of NSCI – and that’s not even half of it – but one thing is for sure: these are interesting times for HPC.