HPE GreenLake for HPC delivers manufacturing competitiveness as a service

By Max Alt, Distinguished Technologist and Director, Hybrid HPC, Hewlett Packard Enterprise

August 30, 2021

High-performance computing (HPC) productivity is directly linked to manufacturing competitiveness today because products are designed using computer aided engineering (CAE) tools that depend on HPC.

When HPC is easy to manage and use – instead of complex and time-consuming – teams can spend more time on valuable design tasks (instead of tuning workloads). The cost of managing HPC is also reduced.

And when HPC performance is faster, teams can create more design iterations and run more detailed simulations in less time. So they can create products that are safer and reach the market sooner – in other words, products that are more competitive.

This is one reason manufacturers want powerful HPC solutions they can consume as a utility-like service – and why HPE has created HPE GreenLake for HPC solutions, featuring the leading-edge performance of AMD EPYC processors.[1]

But it’s not the only reason. Businesses are re-evaluating their compute needs, because data volumes keep getting bigger and converged workloads keep getting more complex. So, businesses increasingly want compute delivered as a service, to take away the hassle of IT management, infrastructure scaling, technology refreshes, and financial hurdles. This change is happening in lots of industries, including energy, life sciences, and financial services.

HPE GreenLake for HPC solutions provide that as-a-service experience. The HPC infrastructure is fully managed by HPE. Billing is monthly and based on HPC consumption, so there are no up-front payments. And you always have an on-site capacity buffer, so you can scale up when demand spikes or a new project starts. (You can also scale down flexibly.)

How does this approach increase competitive advantage? Let’s look at some of the manufacturing challenges it helps to solve.

HPC challenges faced by manufacturers

Manufacturers can aim to differentiate in a number of ways: competing on product quality, innovation, or cost-efficiency. But today, companies cannot lead in any of these areas without effective use of CAE.

CAE includes a broad range of disciplines, such as:

  • Structural analysis – This includes stress analysis on components and assemblies, which is important in improving safety and quality
  • Fluid analysis – Engineers can use computational fluid dynamics (CFD) to simulate thermal and fluid flows, for example to test aerodynamics
  • Multibody dynamics (MBD) – Analysis of kinematics (the motion of systems of objects) and calculation of loads
  • Noise, vibration, and harshness (NVH) – Simulation and optimization of these effects, for example in cars
  • Multiphysics analysis – A combination of analytic techniques for simulating dynamic, real-world performance of products

The tools that enable these disciplines demand powerful HPC resources. Some of the most widely used CAE tools in manufacturing are Ansys® Fluent (for fluid analysis), Altair RADIOSS (structural dynamics), OpenFOAM (open-source CFD software), and Ansys® LS-DYNA® (finite element analysis/FEA).
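To illustrate why these tools are so compute-hungry, here is a minimal sketch – not taken from any of the products above – of an explicit finite-difference heat-diffusion update, the kind of stencil computation that CFD and FEA solvers repeat over millions of cells, thousands of time steps, and many design iterations:

```python
import numpy as np

def diffuse(u, alpha=0.1, steps=100):
    """Apply an explicit 5-point-stencil diffusion update `steps` times.

    Each interior cell moves toward the mean of its four neighbors.
    Production CFD/FEA codes perform stencil sweeps like this at vastly
    larger scales - which is why simulation throughput tracks HPC power.
    """
    u = u.copy()
    for _ in range(steps):
        u[1:-1, 1:-1] += alpha * (
            u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - 4 * u[1:-1, 1:-1]
        )
    return u

# Toy problem: a single hot spot in the middle of a cold plate
grid = np.zeros((64, 64))
grid[32, 32] = 100.0
result = diffuse(grid)
```

Doubling the grid resolution in each dimension quadruples the work per sweep (and usually forces smaller time steps too), so finer, more realistic models quickly outgrow anything but HPC-class hardware.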

To achieve leading performance and productivity, teams need to be able to:

  • Access modern HPC technologies that provide a performance edge – Faster HPC maximizes the size, detail level, and speed of simulations. Teams can also work with larger datasets. However, traditional budget cycles allow manufacturers to refresh HPC technologies only once every few years.
  • Reduce management burden, so teams can focus on design work – HPC can be complex and costly to configure and manage. Reducing these burdens can help manufacturers work more productively and efficiently.
  • Scale flexibly to support new projects and business growth – With traditional finance models, HPC can only scale up or out, never back down, and acquiring new equipment is costly and slow. Teams need agility to scale quickly and flexibly when needs change, to maximize ROI.

The as-a-service solutions

HPE’s solutions deliver cutting-edge performance as a service to solve these problems.

Modern HPC technologies are more accessible with HPE GreenLake. You can choose from the entire HPE server and storage portfolio, including standard and custom solutions, and servers powered by the latest 3rd-Gen AMD EPYC processors. With HPE GreenLake there are no up-front payments. Costs are aligned to HPC consumption.

Your HPC infrastructure is managed for you by HPE, so teams can finally focus on what they do best. IT is still located where you need it – in your data center or almost any other location. We also help you get the most from HPC workloads by helping you adopt containers and other modern approaches.

You have the flexibility to scale up or down whenever you need to. HPE helps you monitor your capacity usage, and provides an on-site hardware buffer that can be switched on – or off – whenever your needs change. You only pay for resources you use, so having an on-site buffer is a great way to scale with agility.
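The consumption model above can be sketched with simple arithmetic. The rate and the reserve term below are hypothetical illustrations, not HPE pricing:

```python
def monthly_bill(node_hours_used, rate_per_node_hour, reserved_minimum=0.0):
    """Consumption-billing sketch: pay for metered usage, subject to any
    agreed monthly minimum. The on-site buffer capacity only shows up on
    the bill when it is actually used. All figures are illustrative
    assumptions, not HPE contract terms.
    """
    metered = node_hours_used * rate_per_node_hour
    return max(metered, reserved_minimum)

# A quiet month vs. a crunch month at a hypothetical $2.50/node-hour rate
quiet = monthly_bill(3_000, 2.50)    # steady-state usage
crunch = monthly_bill(12_000, 2.50)  # demand spike taps the on-site buffer
```

The point of the model is that the crunch month costs more only because more was consumed; there is no up-front purchase to justify, and the bill falls back down when the spike passes.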

The difference with 3rd-Gen AMD EPYC™ processors

But how about competing on pure performance terms?

AMD EPYC™ processors deliver industry-leading performance and scalability for CAE workloads. HPE and AMD have a long-standing partnership focused on delivering the next era of computing for HPC. AMD Infinity Architecture is tightly integrated with HPE server architecture, for advanced performance. The power of 3rd-Gen AMD EPYC technologies provides:

  • High-frequency processors enabling significant per-core performance
  • Highest core counts and large memory capacity, fast memory bandwidth and high ratios of cache per core
  • Balanced performance and efficiency, allowing organizations to accelerate and optimize workflows such as CFD, EDA, and others
  • Performance to handle large scientific and engineering datasets, ideal for compute-intensive models and analysis techniques

AMD EPYC processors power some of the world’s fastest, most scalable data centers and supercomputers. Together, HPE and AMD can deliver high-performance clusters to power manufacturing workloads of any size, while taking advantage of the technologies afforded by the exascale era.

And today, HPE and AMD are defining and delivering the next era of computing for HPC with tuned and optimized HPC solutions – when, where, and how you need it.

More on the topic: HPC as a Service to Accelerate Transformational Growth business paper

Check out these additional resources:

Learn more about HPE GreenLake for HPC

Learn more about our HPC solutions

High-value CAE solutions for manufacturing whitepaper

AMD EPYC™ Tech Docs and White Papers | AMD

 You can also email your questions and comments to hybridhpcsolutions@hpe.com.

Would you like to apply for our HPE Insiders for HPC client-only community? It is a dedicated invitation-only space to keep you informed, give you your seat at the table, and have some fun along the way with your peers.

Max Alt
Distinguished Technologist and Director, Hybrid HPC
Hewlett Packard Enterprise

Max Alt leads cloud-oriented HPCaaS initiatives, including the GreenLake Cloud Services for HPC offering. Prior to joining HPE, Max was SVP of AI & HPC Technology at Core Scientific, which in 2020 acquired Atrio, the company Max founded and led as CEO. Atrio created a leading-edge hybrid cloud platform for HPC orchestration, cluster, and container management.

Max has a unique background with almost 30 years of experience in software performance technologies and high-performance computing. He is both an entrepreneur and a large-scale enterprise leader. Max founded several tech start-ups in the Bay Area and spent 18 years at Intel in various engineering and leadership roles, including developing next-generation supercomputing technologies. His strongest expertise is in computer and server architectures, cloud technologies, operating systems, compilers, and software engineering. Max received his BS in Math and Computer Science at Tel Aviv University and his master's degree in Software Engineering from Carnegie Mellon University.


[1] https://www.amd.com/en/campaigns/high-performance-computing & https://www.amd.com/en/campaigns/amd-and-hpe

 
