HP(E) Still Stands Solidly Astride the HPC Server Market

By John Russell

November 20, 2015

On November 1 – not quite three weeks ago – Hewlett Packard Enterprise (HPE) emerged from the Big Split. That’s old news given the yearlong lead-up. Throughout the “separation” process, opinions varied wildly (still do) over HPE’s prospects. Clearly it’s early days, but when IDC rolled out HPC market numbers on Tuesday, HP remained firmly ahead of its closest competitors with a 36.1 percent share of the HPC server market. Dell was number two with 16.9 percent.

HPE reports much of the heavy lifting is done – successful introduction of a new HPC product line (Apollo), formation of a strategic HPC alliance with Intel, and reorganization of HPC and big data into a single global business unit – with most of the changes accomplished throughout the year rather than in a last-minute dash. It hasn’t been painless. In September HP (pre-split) announced plans to cut on the order of 25,000 staff, but the hardest part may be over.

At SC15, instead of a barrage of new product announcements, HPE has been reinforcing the idea that its steady preparation is paying off. “We actually ‘went live,’ if you will, on August 1 when all of our internal systems cut over in preparation for November 1,” said Bill Mannel, vice president and general manager of the new HPC and big data global business unit. “I think we had a little customer interruption from a shipping perspective in August because we had to shut down a factory in order to cut over systems, but that’s it. By November everything was done.”

Time, of course, will tell how successful the HPE gambit proves. For the moment, HPE seems to have given itself a good shot at success. Like other major HPC systems makers, HPE’s eyes are on the enterprise, and its evolving product line spans from supercomputing to mid-size and small HPC servers.

The Apollo line, launched roughly 18 months ago, is the HPC mainstay. The top-of-the-line Apollo 8000 (liquid cooled) and 6000 (air cooled) systems have been well received, with several significant wins including the 8000-based Peregrine supercomputer jointly developed with DOE’s National Renewable Energy Laboratory (NREL). At mid-year, the 2000 and 4000 were added to the line.

“The 2000 is an HPC play that allows enterprises and smaller customers to comfortably move to the type of purpose-built HPC infrastructure that a lot of the bigger players have. Its standard footprint fits in a 19″ rack, it’s air cooled, has drives in the front, and cables in the rear,” said Mannel. “The 4000 is a big data machine. The reference architecture is built around Hadoop and we have object storage from both Scality and Cleversafe.”
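Both Scality and Cleversafe expose S3-compatible object interfaces, so one common way an application running on a cluster like the 4000 reaches such a store is simply to point a standard S3 client at the on-premises endpoint rather than at AWS. The Python sketch below illustrates that pattern; the endpoint URL, bucket name, and credentials are hypothetical placeholders, not details of HPE’s reference architecture.

```python
# Minimal sketch: addressing an on-premises, S3-compatible object store
# (e.g., Scality or Cleversafe) with a standard S3 client.
# The endpoint URL, bucket name, and credentials below are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # on-prem gateway, not AWS
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Stage an input file for a downstream Hadoop job, then list the bucket contents.
s3.upload_file("genomes.fastq", "hpc-data", "inputs/genomes.fastq")
for obj in s3.list_objects_v2(Bucket="hpc-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The same endpoint-override idea carries over to Hadoop itself, whose S3-compatible filesystem connector can likewise be aimed at an on-premises gateway instead of the public cloud.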

Recently, the Moonshot line, which was introduced in 2013 and is generally aimed more at conventional datacenter and cloud applications, was also shifted under Mannel’s responsibility. “Moonshot is aligned alongside the Apollo. I now have a full product line to bring to market,” said Mannel.

In July HP announced the deeper alliance with Intel, which among other things facilitates joint collaboration among HPE, Intel, and HPE customers to gain early access to Intel technology and to create purpose-built platforms. Two key components of the alliance include:

  • Closer collaboration with Intel overall to incorporate the Intel Scalable System Framework into the Apollo line, working around specific workloads and datasets and optimizing around those to create purpose-built systems for industry verticals and other customer workloads.
  • Expanded Centers of Excellence (CoE) intended to make it easier for HPE customers to work with ISVs and HPE/Intel engineers to modernize code and optimize the infrastructure for HPC-related workloads. There’s one in Grenoble, France, and another now being built out in Houston. The dedicated infrastructure and expertise available at the CoEs, as well as a broad portfolio of services, can be used on-site or accessed remotely.

Broadly, the idea is to provide tuned and balanced systems focused on unique customer workloads and application performance. The systems will leverage next-generation Intel Xeon processors, the Intel Xeon Phi product family, Intel Omni-Path interconnect technology, and the Intel Enterprise Edition of Lustre. Under the alliance, for example, HPE has had an Apollo 2000 with Omni-Path infrastructure running specific customer codes since October.

“We now have a technology roadmap and can have a conversation with a customer (NDA required) on what our roadmap is going to be together,” said Mannel, adding that HPE has several ongoing collaborations in financial services, oil & gas, and life sciences.

Now that it is on its own, HPE is working to quickly reassure the market with a clear strategy message and notable reference customers and use cases. “One customer is the Pittsburgh Supercomputing Center, where we have partnered across the HPE server portfolio with Intel using Omni-Path Architecture and have created a unique HPC and big data architecture for PSC,” said Mannel.

Another example is work with the Texas Advanced Computing Center (TACC) at the University of Texas. “We have an Apollo 8000 there which is being used by NTT working on direct-voltage development. Currently the platform is running 380V DC within the rack, and the ultimate goal is to be able to feed the 380V DC directly as opposed to using a conversion process, which is what we do now,” said Mannel. The system not only provides computing capacity for TACC and its users but also serves as a test bed for power technology.

Like Intel, HPE is a “founding” member of the OpenHPC initiative being developed under the Linux Foundation. The notion of a “standard” HPC software stack is attractive for many reasons, not least because it would make adoption of HPC easier for the broader enterprise community. Mannel agrees, but adds that even though HPE is a founding member, the work is still very early.

It does seem the link between Intel and HPE is growing even stronger. Take, for example, the National Strategic Computing Initiative (NSCI). “We and Intel recognized its importance and decided to add government as a focus and are looking at collaboration in the area as well,” said Mannel.

NSCI, of course, is attracting lots of attention from the entire HPC community. A draft implementation plan has been crafted but hasn’t been shown publicly. At an NSCI overview during SC15 yesterday, William T. Polk of the Office of Science and Technology Policy said he didn’t think the plan would be presented until early next year, perhaps around February. Details around funding, procurements and process remain unsettled. The draft implementation plan is said to be quite long and will no doubt undergo revision.

Nevertheless, Mannel said “[NSCI representatives] were actually in Houston looking, which is where I am based, and we had them for a full day going through HPE engineering, manufacturing, and our test laboratory.”

Clearly, there are many moving pieces to the HPE story – but that’s really no different than for most system builders. Change is in the air for everyone with the collision of big data and HPC, the slowing of Moore’s law, increased heterogeneity, the race to exascale, and the future of NSCI – and that’s not even half of it. If one thing is certain, these are interesting times for HPC.
