HPE Floats HPC-as-a-Service on GreenLake Cloud

By Tiffany Trader

December 9, 2020

Hewlett Packard Enterprise (HPE) today introduced a set of pre-configured HPC services via its HPE GreenLake platform with planned general availability in spring 2021. The new managed service offerings can be deployed on-premises in the customer’s own datacenter or in a colocation facility, bringing HPE’s HPC portfolio into the mainstream enterprise segment.

Responding to the charge that HPC systems are costly, complex and require specialized skillsets to implement and operate, HPE says it can simplify the experience, speeding up deployment of HPC projects by up to 75 percent and reducing capital expenditures by up to 40 percent (citing a Forrester Consulting study commissioned by HPE).

HPE says that GreenLake customers pay for only what they use. Without providing exact pricing details, General Manager of GreenLake Cloud Services Keith White said HPE will offer an elastic, “true metering” model, based on storage and compute usage scenarios. “It’s along the lines of ‘how many gigabytes am I using?’ or ‘how many cores?’ — those types of scenarios,” he said. “Obviously that becomes more of an operational expense versus the full capital expense, if that’s required.”
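To make the “true metering” idea concrete, here is a minimal sketch of how a consumption bill could be computed from measured core-hours and storage. The rates, the reserved-capacity baseline and the function names are hypothetical illustrations, not actual HPE GreenLake pricing.

```python
# Minimal sketch of consumption-based ("true metering") billing.
# All rates and the reserved baseline are hypothetical examples,
# not actual HPE GreenLake pricing.

CORE_HOUR_RATE = 0.05  # hypothetical dollars per core-hour
GB_MONTH_RATE = 0.02   # hypothetical dollars per GB-month of storage

def monthly_bill(core_hours: float, gb_months: float,
                 reserved_core_hours: float = 0.0) -> float:
    """Charge only for measured usage above any reserved baseline."""
    billable_core_hours = max(core_hours - reserved_core_hours, 0.0)
    return billable_core_hours * CORE_HOUR_RATE + gb_months * GB_MONTH_RATE

# Example month: 120,000 core-hours of compute, 50,000 GB-months of storage.
print(f"${monthly_bill(120_000, 50_000):,.2f}")  # -> $7,000.00
```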

The fully managed, pre-bundled services will harness HPE’s software, storage and networking solutions, and will be sold in small, medium and large options. HPE plans to launch the offering globally in the spring with the small and medium sizes, based on HPE Apollo and ProLiant servers, and then expand to the large-size offering, which will leverage the Cray line and technologies, including Slingshot, its high-speed interconnect, and the Cray programming environment.

The GreenLake HPC services will also leverage HPE Performance Cluster Manager and ClusterStor, HPE’s high-performance parallel storage system. Customers will be able to opt for GPU-based machines to meet the specific needs of both HPC and AI workloads.

GreenLake cloud services for HPC include access to the following:

• HPE GreenLake Central — offers an advanced software platform for customers to manage and optimize their HPC services.
• HPE Self-service dashboard — enables users to run and manage HPC clusters on their own, without disrupting workloads, through a point-and-click interface.
• HPE Consumption Analytics — provides at-a-glance analytics of usage and cost based on metering through HPE GreenLake.
• HPC, AI & App Services — standardizes and packages HPC workloads into containers, making it easier to modernize, transfer and access data.

HPC at Any Size

With high-performance computing making strides into the enterprise, driven by AI and analytics, HPE is focused on providing the right-size resource with the right level of support, be it a managed service in the cloud, a managed service in an on-premises or colocation environment, or a more traditional supercomputing installation. With an augmented product portfolio following HPE’s acquisition of Cray last year, plus the capabilities of GreenLake and the Pointnext cloud consulting services, HPE has lined up the pieces to back this strategy.

At the high end, HPE is teed up to launch the exascale era in the United States with the scheduled deployment of Frontier at Oak Ridge National Laboratory in late 2021.

HPE’s broader strategy, however, is to use that same HPC technology to harness and tame the explosion of data happening in every enterprise, large and small, helping them process that data and unlock insights faster, according to Pete Ungaro, senior vice president and general manager of HPE’s HPC and mission critical solutions group.

“HPE wants to take these technologies that traditionally were at the pinnacle of the HPC market, and bring those down to small sizes and medium customers, hitting small and medium sized businesses or departments within larger corporations that don’t have a lot of traditional HPC expertise, where partners can bring not only a solution to them, but also the services that they can wrap around that to help them to implement,” said Ungaro.

Insights and the Market

Market watcher Addison Snell, CEO of Intersect360 Research, says HPE’s move into on-premises as-a-service offerings, including HPC capabilities, lines up with his firm’s analysis of HPC market trends.

“A growing proportion of HPC spending is moving to ‘cloud-like’ engagements that don’t fit in the standard cloud bucket,” he told HPCwire. “We’re seeing demand for things like SaaS or managed services contracts that have the benefit of utility pricing, but maintaining on-premises advantages such as data locality, data sovereignty, and control.”

Cloud-type cost models that favor OPEX over CAPEX are more attractive for businesses and organizations facing COVID-19 induced economic pressures. “The situation with COVID has really driven a lot of push for customers that are looking for a much more cost effective option to reduce costs, reduce capital outlay, and really maintain that free cash flow,” said White.

Snell pointed to HPE’s position as the leading provider of on-premises HPC systems and its established HPE Pointnext services as advantages for serving enterprise needs. “GreenLake HPC-as-a-service deployments leverage these to address this emerging segment that seeks to combine the advantages of on-premise and cloud,” he said.

Citing HPE’s more than 37 percent share of the high performance computing market, Ungaro said HPE is approaching HPC cloud “fundamentally differently” from traditional cloud providers. “We start with our leadership HPC position in the market, and then bring that capability to a cloud infrastructure, rather than starting with a cloud infrastructure and trying to apply that to HPC,” he said.

Growth, HPC Use Cases and Engagement

GreenLake has seen significant growth, according to White. Since 2017, the business unit has grown from about 350 customers to over 1,000, while total contract value has nearly tripled in that time to over $4 billion. Notable customer wins include SAP, Kern County in California, Nokia and YF Life Insurance. On the partner side, White highlighted Accenture’s use of HPE GreenLake in the hybrid cloud offering it provides to its own customers.

While GreenLake HPC-as-a-service won’t be widely available until next spring, HPE has HPC customers using HPE GreenLake today. One of these is Zenseact (formerly known as Zenuity), a developer of autonomous driving software based in Sweden and China. Zenseact relies on HPE’s GreenLake HPC services to perform 10,000 simulations per second, using driving data from its test cars, in order to improve software design for safer autonomous vehicles.

Other use cases for the HPC-as-a-service offering include financial services, drug discovery and oil and gas exploration.

HPE is working with ISVs, such as ANSYS, to build end-to-end solutions in different vertical segments, and is collaborating closely with key platform providers such as Activeeon, Core Scientific and UberCloud, so that customers can deploy on workload-optimized platforms and containerize their applications running on GreenLake cloud services.

HPE says that customers can order and configure GreenLake services via a simple self-service portal and can get up and running in “as little as 14 days.” All GreenLake cloud services, including for HPC, are also available through HPE’s channel partner program, according to the company.

GreenLake versus Azure

HPE’s supercomputing-as-a-service offering with Azure (originated by Cray in 2017) is said to be on track and growing. The arrangement lets existing Azure customers order an HPE supercomputer (including ClusterStor storage) that is deployed in the Azure cloud and integrated with Azure services.

Ungaro explained how the Azure arrangement differs from the GreenLake HPC services:

“We’re seeing increased demand from customers who want that same capability [as provided by the Azure scheme] in their own datacenters, but they don’t want that size, they want a smaller, more bite-sized chunk,” he said.

“GreenLake allows these customers to have that versatility and that flexibility,” said Ungaro, adding, “I think what we’re going to see more and more is people that want to deploy some of that core capability on their own in their own datacenters or in a colo, and then at times when they have more increased needs, they may burst out into the public cloud, and may take advantage of some of the capability in that public cloud for those burst times, but then have the core of their applications, the core of their data sets local, so they can get the best latency, the best turnaround times and most control over their environments.”
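Ungaro’s burst scenario boils down to a simple placement rule: run on the local cluster while it has headroom, keep data-heavy jobs near their datasets, and spill to the public cloud only at peak. The sketch below illustrates that rule; the Job fields, thresholds and function names are hypothetical illustrations, not part of GreenLake.

```python
# Minimal sketch of the hybrid "burst" pattern Ungaro describes:
# run on the on-prem/colo cluster by default, and spill to public
# cloud only when local capacity is exhausted. The job fields and
# thresholds are hypothetical, not GreenLake internals.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    cores: int
    data_local: bool  # large local datasets favor on-prem placement

def place_job(job: Job, free_onprem_cores: int) -> str:
    """Return a placement decision for a submitted job."""
    if job.cores <= free_onprem_cores:
        return "on-prem"
    if job.data_local:
        # Moving big datasets to the cloud costs latency and transfer
        # time, so queue locally instead of bursting.
        return "on-prem (queued)"
    return "cloud-burst"

print(place_job(Job("cfd_run", cores=512, data_local=True), free_onprem_cores=256))
```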

HPE cited plans to further evolve GreenLake cloud to allow customers to manage across all of these environments. “They’ll be able to manage their on-prem workloads, as well as up into multiple cloud environments, depending on what their needs and what their usage is,” explained Ungaro.
