HP, Intel Partner to Expand HPC Into the Enterprise

By Tiffany Trader

July 13, 2015

When the going gets tough, the tough join together to innovate. This is the message we are hearing from HPC stakeholders across the government and vendor landscape. It would be an oversimplification to say that big data alone is driving this deeper partner integration, but the technology trend has something to do with it. More precisely, the challenge comes from the convergence of the Moore’s law slowdown, the increased use of HPC for business needs and the shift to more data-intensive workloads.

In a move that was motivated by all of these factors, HP and longtime partner Intel are taking their strategic alliance to the next level. HP explains that the joint High Performance Computing (HPC) alliance was “created to advance customer innovation and help expand accessibility of HPC to enterprises of all sizes.”

The announcement can be distilled down to two parts, reflecting a development level and a go-to-market level:

  • Customized industry-specific solutions for HP Apollo systems, integrated with Intel’s Scalable System Framework.
  • Expanded Centers of Excellence to facilitate customer access to HPC.

The proliferation of HPC means that it is no longer relegated to academia, government and a handful of enterprise verticals (you know the ones). Consider for a moment the size of the traditional HPC space and contrast it with the addressable market that is the enterprise. As Intersect360 has tracked, enterprise HPC now makes up slightly more than one-quarter of the total HPC market, and it is growing more quickly than traditional HPC. HPC vendors are dialed in to this growth and potential, which is further bolstered by big data applications driving the need for HPC solutions.

So it makes sense that a major aim of the HP-Intel alliance is expanding into the enterprise. To support that positioning, HP is moving from a product and features perspective to a solutions perspective with an emphasis on financial services, life sciences, and oil and gas. The partners will work closely on customized industry-specific solutions for HP Apollo systems based on Intel’s Scalable System Framework.

HP saw in Intel a partner that would bring technical expertise as well as market knowledge to support its effort to deliver solutions to customers.

“Intel has announced a lot of IP in HPC-specific products – e.g., Phi, Omni-Path, Intel Enterprise Edition Lustre, NVRAM capability, and SSDs – and we will be working to integrate these elements into our Apollo server line, which is optimized for high-performance computing and big data,” said Bill Mannel, vice president and general manager, HPC and Big Data, HP Servers, in an interview with HPCwire.
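
The announcement itself includes no code, but as a purely illustrative sketch of how such coprocessor elements were programmed in this era, the C snippet below offloads a loop to an attached device such as a Xeon Phi using standard OpenMP 4.0 target directives. The function and data names are hypothetical, and in the absence of a device the loop simply executes on the host.

    #include <stdio.h>

    #define N 1000000

    /* Hypothetical sketch, not HP or Intel code: offload a vector
     * scale to an attached coprocessor (e.g., Xeon Phi) via the
     * standard OpenMP 4.0 target construct; the map clause copies
     * the array to the device and back. */
    static void scale(float *x, long n, float alpha) {
        #pragma omp target map(tofrom: x[0:n])
        #pragma omp parallel for
        for (long i = 0; i < n; i++)
            x[i] *= alpha;
    }

    int main(void) {
        static float x[N];
        for (long i = 0; i < N; i++)
            x[i] = 1.0f;
        scale(x, N, 2.0f);
        printf("x[0] = %.1f\n", x[0]);   /* expect 2.0 */
        return 0;
    }

Built with an OpenMP 4.0 compiler (for example, gcc -fopenmp), the same source runs on the host when no coprocessor is present, which is part of what made the directive-based model attractive for enterprise code bases.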

HP and Intel are enhancing capabilities at the HPC Center of Excellence in Grenoble, France, which was established two years ago to facilitate HP relationships in the European market. Further, HP is opening a second Center of Excellence in Houston, Texas, to better support the North American market. Both centers provide customers with access to best-of-breed Intel and HP technology, industry-optimized HPC solutions from HP and the opportunity to work with ISVs and HP/Intel engineers to modernize code and optimize infrastructure for HPC-related workloads.

“As data explodes in volume, velocity and variety, and the processing requirements to address business challenges become more sophisticated, the line between traditional and high performance computing is blurring,” Mannel said. “With this alliance, we are giving customers access to the technologies and solutions as well as the intellectual property, portfolio services and engineering support needed to evolve their compute infrastructure to capitalize on a data-driven environment.”

Mannel connects the tighter partnership to the proliferation of computing technologies and to the fact that the standard x86 architecture is challenged to deliver generation-over-generation performance improvements for many applications. “This gets people looking at other technologies – accelerators, coprocessors, FPGAs – creating this body of interested people that want to engage these new technologies, but to get to them is requiring in some cases a new way of developing code: code modernization aimed at extracting parallelism,” he commented.
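
To make “extracting parallelism” concrete, here is a minimal, hypothetical C sketch of the kind of transformation Mannel alludes to: a serial reduction loop annotated with OpenMP directives so its iterations are spread across cores and vectorized into SIMD lanes. It is illustrative only, not drawn from HP or Intel code.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative only: a dot product restructured so OpenMP can
     * distribute iterations across threads and vectorize each chunk,
     * the sort of parallelism that code modernization aims to expose. */
    static double dot(const double *a, const double *b, long n) {
        double sum = 0.0;
        #pragma omp parallel for simd reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += a[i] * b[i];
        return sum;
    }

    int main(void) {
        const long n = 10000000;
        double *a = malloc(n * sizeof *a);
        double *b = malloc(n * sizeof *b);
        if (!a || !b) return 1;
        for (long i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; }
        printf("dot = %.1f\n", dot(a, b, n));  /* expect 20000000.0 */
        free(a); free(b);
        return 0;
    }

The serial version is the same loop without the pragma; the directive is a hint the compiler can honor on whatever cores and vector units are available, which is why such annotations became a common first step in modernizing legacy x86 code.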

“When x86 does not satisfy user needs, they are having to heavily engage in other ways of exploiting performance,” Mannel continued. “As HPC becomes a method for getting value from big data, there are a greater number of customers not familiar with HPC and related techniques from an administrative and programming standpoint. As customers embrace their big data they are finding challenge in how to get value out of it, and HPC is one of the ways they can do that.”

Behind the solution-oriented focus and the Centers of Excellence is the notion that along with performance, access is becoming increasingly important in this era of democratized HPC. For HP, access means allowing customers to consume HPC in the way and at the time they wish. Mannel referenced HP’s Performance Optimized Datacenter (POD) as providing this capability: HP sets up PODs near the customer site and the customer essentially pays by the sip.

[Slide: Intel Scalable System Framework, ISC 2015]

At a pre-ISC press conference held last Friday, Intel’s Charlie Wuischpard referred to the Scalable System Framework as “the organizing principle for Intel and its OEM partners.” With this underlying blueprint – the compute, the fabric, the memory/storage and the software – Intel has created a template for building solutions across a range of scales.

Wuischpard, vice president and general manager in Intel’s data center group, said the tighter collaboration began as a conversation between the two companies’ CEOs on the cusp of HP’s restructuring plan to split into two separate companies. As HPC has grown beyond its roots in academic and research circles, HP sees opportunities to expand its presence in both traditional and newer enterprise and big data-oriented markets.

“They want to put their dollars where we want to put our dollars,” said Wuischpard. “We see HP as a big scale partner. The size of their field organization and channel reach is greater than ours in this part of the industry.”

“There’s actually a multi-phase, three-generation [roadmap] at least mapped out through 2020,” Wuischpard continued, “starting from [alignment within] our current road maps with further intersections taking place as R&D drives greater levels of differentiation and benefit.”

This slide from HP and Intel depicts the layered approach:

[Slide: HP-Intel HPC Alliance, ISC 2015]

Intel expects that the framework will enable not just HP, but all of its OEM partners, to provide differentiation within their respective markets and customers. Intel has a similar partnership in place with Cray, which in April announced that it is basing its future Shasta architecture on the Intel framework.
