HP(E) Still Stands Solidly Astride the HPC Server Market

By John Russell

November 20, 2015

On November 1 – not quite three weeks ago – Hewlett Packard Enterprise (HPE) emerged from the Big Split. That’s old news given the yearlong lead-up. Throughout the “separation” process, opinions varied wildly (still do) over HPE’s prospects. Clearly it’s early days, but when IDC rolled out HPC market numbers on Tuesday, HP remained firmly ahead of its closest competitors with a 36.1 percent share of the HPC server market. Dell was number two with 16.9 percent.

HPE reports much of the heavy lifting is done – the successful introduction of a new HPC product line (Apollo); formation of a strategic HPC alliance with Intel; and reorganization of HPC and big data into a single global business unit – with most of the changes accomplished throughout the year rather than in a last-minute dash. It hasn’t been painless. In September HP (pre-split) announced plans to cut on the order of 25,000 staff, but the hardest part may be over.

At SC15, instead of a barrage of new product announcements, HPE has been reinforcing the idea that its steady preparation is paying off. “We actually ‘went live’, if you will, on August 1 when all of our internal systems cut over in preparation for November 1,” said Bill Mannel, vice president and general manager of the new HPC and big data global business unit. “I think we had a little customer interruption from a shipping perspective in August because we had to shut down a factory in order to cut over systems, but that’s it. By November everything was done.”

Time, of course, will tell how successful the HPE gambit proves. For the moment, HPE seems to have given itself a good shot at success. Like other major HPC systems makers, HPE’s eyes are on the enterprise, and its evolving product line spans from supercomputers to mid-size and small HPC servers.

The Apollo line, launched roughly 18 months ago, is the HPC mainstay. The top-of-the-line Apollo 8000 (liquid-cooled) and 6000 (air-cooled) systems have been well received, with several significant wins including the Peregrine supercomputer, based on the 8000 and jointly developed with DOE’s National Renewable Energy Laboratory (NREL). Midway through this year, the 2000 and 4000 were added to the line.

“The 2000 is an HPC play that allows enterprises and smaller customers to comfortably move to the type of purpose-built HPC infrastructure that a lot of the bigger players have. Its standard footprint fits in a 19″ rack, it’s air cooled, has drives in the front, and cables in the rear,” said Mannel. “The 4000 is a big data machine. The reference architecture is built around Hadoop and we have object storage from both Scality and Cleversafe.”

Recently, the Moonshot line, which was introduced in 2013 and is generally aimed more at conventional datacenter and cloud applications, was also shifted under Mannel’s responsibility. “Moonshot is aligned alongside the Apollo. I now have a full product line to bring to market,” said Mannel.

In July HP announced the deeper alliance with Intel, which among other things lets HPE collaborate jointly with Intel and HPE customers to gain early access to Intel technology and to create purpose-built platforms. Two key components of the alliance are:

  • Closer collaboration with Intel overall to incorporate the Intel Scalable System Framework into the Apollo line, along with joint work around specific workloads and datasets, optimizing around those to create purpose-built systems for industry verticals and other customer workloads.
  • Expanded Centers of Excellence (CoE) intended to make it easier for HPE customers to work with ISVs and with HPE/Intel engineers to modernize code and optimize infrastructure for HPC-related workloads. There’s one in Grenoble, France, and another now being built out in Houston. The dedicated infrastructure and expertise available at the CoEs, as well as a broad portfolio of services, can be used on-site or accessed remotely.

Broadly, the idea is to provide tuned and balanced systems that focus on unique customer workloads and application performance. The systems will leverage next-generation Intel Xeon processors, the Intel Xeon Phi product family, Intel Omni-Path interconnect technology and the Intel Enterprise Edition of Lustre. Leveraging the alliance, HP has, for example, had the Apollo 2000 with Omni-Path infrastructure running specific customer codes since October.

“We now have a technology roadmap and can have a conversation with a customer (NDA required) on what our roadmap is together,” said Mannel, adding that HPE has several ongoing collaborations in financial services, oil & gas, and life sciences.

Now that it is on its own, HPE is working to quickly reassure the market with a clear strategy message and notable reference customers and use cases. “One customer is the Pittsburgh Supercomputing Center, where we have partnered across the HPE server portfolio with Intel using Omni-Path Architecture and have created a unique HPC and big data architecture for PSC,” said Mannel.

Another example is work with the Texas Advanced Computing Center (TACC) at the University of Texas. “We have an Apollo 8000 there which is being used by NTT working on direct voltage development. Currently the platform is running 380V DC within the rack, and the ultimate goal is to be able to feed the 380V DC directly as opposed to using a conversion process, which is what we do now,” said Mannel. The system not only provides computing capacity for TACC and its users but also serves as a test bed for power technology.

Like Intel, HPE is a “founding” member of the OpenHPC initiative being developed under the Linux Foundation. The notion of a “standard” HPC software stack is attractive for many reasons, not least because it would make adoption of HPC easier for the broader enterprise community. Mannel agrees, but adds that even though HPE is a founding member, the work is still at a very early stage.

It does seem the link between Intel and HPE is growing even stronger. Take, for example, the National Strategic Computing Initiative (NSCI). “We and Intel recognized its importance and decided to add government as a focus and are looking at collaboration in the area as well,” said Mannel.

NSCI, of course, is attracting lots of attention from the entire HPC community. A draft implementation plan has been crafted but hasn’t been shown publicly. At an NSCI overview during SC15 yesterday, William T. Polk of the Office of Science and Technology Policy said he didn’t think the plan would be presented until early next year, perhaps around February. Details around funding, procurements and process remain unsettled. The draft implementation plan is said to be quite long and will no doubt undergo revision.

Nevertheless, Mannel said “[NSCI representatives] were actually in Houston looking, which is where I am based, and we had them for a full day going through HPE engineering, manufacturing, and our test laboratory.”

Clearly, there are many moving pieces to the HPE story – but that’s not really any different from most system builders. Change is in the air for everyone with the collision of big data and HPC, the slowing of Moore’s Law, increased heterogeneity, the race to exascale, the future of NSCI – and that’s not even half of it – but one thing is for sure: these are interesting times for HPC.
