HPE Unveils New HPC Solutions

June 20, 2016

PALO ALTO, Calif., June 20 — Today, Hewlett Packard Enterprise (NYSE: HPE) announced new high-performance computing (HPC) solutions, including a comprehensive software-defined platform, enhancements to its Apollo servers, and a new ANSYS computer-aided engineering (CAE) software-based solution designed to help manufacturing organizations optimize their design simulation deployments.

Once the domain of academics and research institutions, HPC is rapidly making its way into industries like energy, life sciences, financial services and manufacturing. While organizations in these industries recognize the technology’s strategic importance, the perceived complexity and diversity of HPC environments can appear to outweigh the business benefits. Today’s HPE announcement dramatically simplifies the deployment and management of HPC solutions, so that companies of all sizes can accelerate their HPC projects and create competitive differentiation for their business.

“As the global HPC market leader with 35.9 percent market share, HPE is upping the ante with new additions to its already-large HPE Apollo portfolio of purpose-built solutions,” said Steve Conway, Research Vice President in IDC’s High Performance Computing group. “These innovative solutions aim to accelerate HPC adoption by organizations of all sizes and segments by enabling faster time to value and increased competitive differentiation through better parallel processing performance, along with reduced complexity and deployment time.”

New Software-Defined Platform for HPC

To simplify and accelerate the configuration, deployment and management of clusters for HPC, HPE is introducing a highly flexible, simple and comprehensive software-defined platform for HPC using the new HPE Core HPC Software Stack with HPE Insight Cluster Management Utility v8.0.

Designed to meet the needs of server cluster environments that may need to scale to thousands of compute nodes, the HPE Core HPC Software Stack is a pre-integrated, pre-tested single software suite that combines open source-based application development tools, libraries, and compilers with HPE cluster management capabilities, including HPE iLO and simple cluster setup tools. This suite enables developers, IT administrators, engineers and researchers to quickly and easily develop, test, deploy and manage their HPC environments.
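
As a simple illustration of the developer workflow such a stack is meant to support, below is a minimal MPI sanity-check program of the kind a team might compile and run on a freshly provisioned cluster. It is a generic sketch, assuming an open-source MPI implementation (such as Open MPI or MPICH) is among the bundled libraries; it is not taken from HPE’s suite itself.

    /* hello_mpi.c -- minimal MPI sanity check for a newly provisioned cluster.
     * Illustrative sketch only: assumes an open-source MPI implementation
     * (e.g., Open MPI or MPICH) is among the libraries bundled with the stack.
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                  /* start the MPI runtime   */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total ranks in the job  */
        MPI_Get_processor_name(name, &name_len); /* node this rank runs on  */

        printf("rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

Built with mpicc hello_mpi.c -o hello_mpi and launched with, for example, mpirun -np 64 ./hello_mpi, each rank reports which compute node it landed on, a quick check that provisioning and the interconnect are working.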

“As a leading research institution focused on applied research in the life sciences industry, we are under continuous pressure to optimize our HPC operations to meet the demanding needs of our users,” said Peter Longreen, COO of the National Life Science Supercomputer at the Technical University of Denmark and deputy head of ELIXIR Denmark. “HPE Insight CMU has provided us with comprehensive functionality to effectively manage our large cluster environment with rapid bare-metal provisioning, simple monitoring with remote management and easy integration with a wide variety of cluster components.”

HPC Systems Design Innovation (Apollo 2000 and Apollo 6000) 

Complementing the new HPE Core HPC software suite to accelerate the performance of customers’ HPC workloads and reduce the complexity of their infrastructure, HPE is introducing systems innovations that build on the latest technologies from the HPE Apollo technology partner ecosystem. These new system capabilities are designed to be managed by the HPE Core HPC Software Stack.

HPE is further enhancing the software-defined Apollo HPC platform by introducing systems design innovations for the Apollo 6000 system with new HPE ProLiant XL260a server trays based on the next generation of the Intel Xeon Phi processor family and the Intel Omni-Path Architecture (Intel OPA) to reduce latency and increase bandwidth and performance. HPE is also enhancing the Apollo 2000 system and ProLiant DL server platforms with Intel OPA fabric options. By combining these latest advancements in the Intel Scalable System Framework with the scalability, flexibility and manageability of the HPE Apollo portfolio, customers will gain new levels of performance, efficiency and reliability. In addition, customers will be able to run HPC applications in a massively parallel manner with minimal code modification.
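
To illustrate what “minimal code modification” can look like in practice, the generic sketch below parallelizes a serial loop with a single OpenMP pragma, the sort of one-line change that lets existing code use the many cores of a processor such as Xeon Phi. It is illustrative only, not HPE or Intel sample code.

    /* saxpy.c -- a serial loop made parallel with one OpenMP pragma.
     * Generic illustration of "minimal code modification"; not HPE or
     * Intel sample code.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 24)

    int main(void)
    {
        float *x = malloc(N * sizeof *x);
        float *y = malloc(N * sizeof *y);
        if (!x || !y) return 1;
        for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

        /* The one-line change: spread iterations across all cores. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);  /* expect 4.0 */
        free(x);
        free(y);
        return 0;
    }

Compiled with cc -fopenmp saxpy.c, the loop’s iterations are distributed across all available cores; without the flag, the same source still builds and runs serially.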

“Bridges uniquely combines HPE’s purpose-built compute platforms optimized for our compute- and data-intensive HPC environment with the Intel Omni-Path interconnect fabric designed to minimize latency,” said Nick Nystrom, Principal Investigator for Bridges at the Pittsburgh Supercomputing Center (PSC). “The tight partnership and broad integration of technologies between Intel and HPE has been instrumental in enabling PSC to design Bridges and to place it into operation with a uniquely integrated infrastructure that delivers unmatched performance and flexibility.”

“The rapid expansion of data sets, precision analytics, and machine learning are driving our customers to take a fresh look at their infrastructure requirements,” said Charles Wuischpard, vice president, Data Center Group, and general manager, High Performance Computing Platform Group, Intel. “Our alliance with HPE leverages the Intel Scalable System Framework, featuring the bootable host Intel Xeon Phi processor and the powerful Intel Omni-Path Architecture, to deliver new levels of scalability, power efficiency and efficient throughput to support complex, highly parallel workloads.”

To provide customers with additional choice and flexibility, HPE is also enhancing HPE Apollo 6000 and HPE Apollo 2000 systems with next-generation EDR 100Gb/s InfiniBand solutions from Mellanox Technologies. Ideal for HPC clusters that require low latency and high bandwidth networking, Mellanox EDR fabric technology provides customers the network performance needed to improve response times and alleviate bottlenecks that impact application performance.
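
As a rough sketch of why both latency and bandwidth matter for response times, the snippet below models the time to move a message as t = latency + bytes / bandwidth. The 100 Gb/s rate matches the EDR link speed cited above; the roughly one-microsecond latency is an illustrative assumption, not a Mellanox specification.

    /* xfer_time.c -- back-of-envelope message transfer time:
     *   t = latency + bytes / bandwidth
     * The 100 Gb/s rate matches the EDR link speed cited above; the
     * ~1 microsecond latency is an assumed illustrative figure.
     */
    #include <stdio.h>

    int main(void)
    {
        const double latency_s     = 1e-6;        /* assumed end-to-end latency */
        const double bandwidth_Bps = 100e9 / 8.0; /* 100 Gb/s = 12.5 GB/s       */
        const double sizes[] = { 64, 4096, 1 << 20, 64 << 20 }; /* bytes */

        for (int i = 0; i < 4; i++) {
            double t = latency_s + sizes[i] / bandwidth_Bps;
            printf("%12.0f bytes: %10.3f us\n", sizes[i], t * 1e6);
        }
        return 0;
    }

Small messages are dominated by latency while large ones are dominated by bandwidth, which is why a fabric must do well on both to avoid application bottlenecks.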

Integrated Manufacturing Solution Capabilities for ANSYS 

The widespread adoption of CAE software with advanced simulation capabilities has helped manufacturers design new offerings more efficiently, reduce development cycle times and increase competitive differentiation. However, many manufacturers struggle to provision the necessary compute resources and to scale from workstation infrastructure to HPC cluster technology. To help customers scale their compute power to better accommodate the escalating demands of increasingly complex simulation models, HPE is introducing the new HPE ANSYS solution for CAE for the manufacturing industry.

The HPE ANSYS solution for CAE is uniquely designed to handle large data sets and address the design prototyping needs of both large enterprises and mid-market customers. Pre-tested on the HPE Apollo 2000, this solution delivers improved application performance, faster resource provisioning and lower total cost of ownership. This includes completing CAE simulations in days rather than weeks, four times higher resource utilization with an HPC cluster compared to workstation-based solutions, and up to two times more compute capacity per square foot.

“HPC is rapidly becoming one of the essential ingredients for the digital economy across organizations of all types and sizes,” said Bill Mannel, vice president and general manager, HPC, Big Data and IoT Servers, HPE. “Today we are excited to unveil new additions to our recently announced portfolio capabilities designed to accelerate accessibility and time to value for HPC by significantly simplifying deployments, streamlining management and boosting performance.”

Pricing and Availability

  • The HPE Core HPC Software Stack with HPE Insight Cluster Management Utility v8.0 is now available for download.
  • The HPE ANSYS Solution for CAE is available now through HPE and worldwide channel partners. The CAE software is available through ANSYS.
  • HPE Apollo 6000 systems with new HPE ProLiant XL260a server trays will be available in September through HPE and worldwide channel partners.
  • The Intel Omni-Path Architecture is now available for initial support on HPE Apollo 6000 and HPE Apollo 2000 systems.
  • Mellanox’s EDR 100Gb/s InfiniBand solution is now available for initial support on HPE Apollo 6000 and HPE Apollo 2000 systems.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise is an industry-leading technology company that enables customers to go further, faster. With the industry’s most comprehensive portfolio, spanning the cloud to the data center to workplace applications, our technology and services help customers around the world make IT more efficient, more productive and more secure.


Source: HPE
