HP Launches HPC & Big Data Global Business Unit

By John Russell

June 4, 2015

When HP finally divides into two pieces – HP Inc. (PCs and printers) and Hewlett Packard Enterprise (servers and services) – how will the HPC portfolio fare? Views vary, of course. The split is meant to let the ‘new’ companies shed distraction and sharpen focus. HPC will live within HP Enterprise, but, perhaps surprisingly, not by itself. Instead HPC is being combined with Big Data into a single global business unit, HPC & Big Data, created in March and led by longtime SGI executive and recent HP import William Mannel.

“It’s not by mistake or coincidence we put HPC and Big Data together,” said Mannel, now VP and General Manager of the HPC & Big Data GBU. “We believe storing Big Data is one thing and we have technologies to do that. Getting productive use out of [the data] is another thing and many customers are using similar types of technologies to get value out of their Big Data.”

Hired last November and a key architect of the new GBU, Mannel discussed with HPCwire the HP strategy for expanding its HPC focus, why the timing is right to push into the enterprise, what some of the obstacles (and solutions) are, and the steady rise of new technologies – x86 still dominates (including at HP) but competitors (GPU, FPGA, OpenPower, ARM) are winning sockets in a trend likely to continue.

Don’t get the wrong idea. HP is hardly abandoning the HPC stratosphere. It still holds the overall lead in Top 500 systems (Nov. 2014) with 179 systems (36 percent) versus IBM’s 153 (30 percent), according to Top 500 organizers. It should be noted those numbers are down slightly for both companies: HP had 182 systems (36.4 percent) six months earlier, and IBM had 176 systems (35.2 percent).

“By any of the metrics you want to use, we are already HPC leaders. We know we’re underrepresented in the top 100 and are expecting to drive forward into that portion of the market as well. However, we think there’s additional opportunity to grow in the enterprise, and that’s why we created a global business unit specifically to focus on HPC and Big Data.”

This notion of aligning HPC and Big Data has steadily gained traction. Many see a growing trend (or at least an obvious desire) by enterprises of all sizes to capitalize on Big Data, whether internally generated or externally available. Add to this the effort by vendors to evangelize and sell HPC to small and medium businesses seeking to differentiate themselves with traditionally HPC-dependent tools (modeling, simulation, etc.), and suddenly the enterprise HPC market looks enticing and big.

For the most part the analyst community seems to agree.

Market watcher IDC, which has adopted the HPDA acronym (high performance data analysis), has reported that 67 percent of HPC sites use HPDA today and forecasts the 2016 HPDA server and storage markets at $1.2B and $800M, respectively, both growing faster than most segments.

Addison Snell, CEO, Intersect360, said, “HP’s strategy to combine HPC and Big Data internally is consistent with the industry dynamics we see, where there are now large categories of enterprise applications that are reliant on performance and scalability. One of HP’s strengths is its position in high-performance storage, and the company will want to leverage that. IBM has already started down this path, but the major enterprise storage vendors — NetApp, EMC, and HDS — are all missing it.”

“HP and Dell have both seen their share of HPC servers increase as a direct result of IBM, previously the clear #1, selling its x86 server business to Lenovo. While Lenovo will continue to sell into the HPC market, the long disruption to the IBM sales process opened up the door for competition. Now HP and Dell are nearly deadlocked for the HPC server market share lead,” said Snell, adding he thinks the HP split will further energize HP’s HPC efforts.

Apollo is the product line underpinning the HPC-cum-Big Data gambit. Introduced last June, the Apollo line spans supercomputing to the datacenter. The 8000 and 6000 were first to market, targeting the high end. Last month, HP announced the 4000 (Big Data) and 2000 (entry level, datacenter) additions to the line.

Outside the enterprise, HP has already racked up impressive wins. The first implementation of the Apollo platform (8000 machines) was by DOE’s National Renewable Energy Laboratory (NREL). NREL worked with Intel and HP to build Peregrine, a warm-water, liquid-cooled supercomputer. (The warm water is reused to heat the building after cooling the computer.)

Peregrine has 6,912 Intel Xeon E5-2670 “Sandy Bridge” processor cores and 24,192 Intel Xeon E5-2695 v2 “Ivy Bridge” processor cores, for a total of 31,104 Intel Xeon cores providing about 608 teraflops (trillion floating point operations per second). Peregrine also has 576 Intel Xeon Phi many-core coprocessors with an aggregate performance of about 582 teraflops. In total, Peregrine is capable of 1.19 petaflops.
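Those figures hang together arithmetically: theoretical peak for a Xeon-based system is simply cores × clock × FLOPs per cycle. Below is a minimal Python sketch of the check. The 2.6 GHz and 2.4 GHz base clocks and the eight double-precision FLOPs per cycle per AVX core are assumptions drawn from Intel’s published specs for these parts, not figures from the article, and the Xeon Phi aggregate is taken as stated.

```python
# Back-of-the-envelope check of Peregrine's quoted peak figures.
# Clock speeds and FLOPs/cycle below are assumptions (Intel's published
# specs for these parts), not numbers given in the article.

def peak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak in teraflops: cores * clock * FLOPs/cycle."""
    return cores * clock_ghz * flops_per_cycle / 1e3  # GFLOPS -> TFLOPS

sandy = peak_tflops(6_912, 2.6, 8)    # E5-2670: 2.6 GHz base, 8 DP FLOPs/cycle (AVX)
ivy   = peak_tflops(24_192, 2.4, 8)   # E5-2695 v2: 2.4 GHz base, 8 DP FLOPs/cycle
xeon_total = sandy + ivy              # ~608 TFLOPS, matching the article

phi_total = 582.0                     # aggregate as stated; implies ~1.01 TFLOPS per card

print(f"Xeon peak:   {xeon_total:.0f} TFLOPS")                      # 608
print(f"System peak: {(xeon_total + phi_total) / 1e3:.2f} PFLOPS")  # 1.19
```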

In April of this year, AGH University of Science and Technology in Krakow brought online its Prometheus supercomputer. Packing 1.7 petaflops of peak computational performance, the HP-built machine is the most powerful supercomputer in the history of Poland and the world’s largest installation of HP Apollo 8000 servers. The 30-metric-ton machine houses 1,728 HP Apollo 8000 InfiniBand-connected servers inside 15 racks.

High-profile wins such as these, HP hopes, will create buzz around the entire Apollo line, including its more recent members targeted at Big Data and datacenter activities. Here is a brief snapshot of the Apollo platform:

  • Apollo 2000 is the entry offering. It’s available with up to four servers in a 2U chassis, uses Intel Xeon E5-2600 processors, and supports as many as 24 drives per node. “You can use one as the head node and the other three as compute nodes,” said Mannel.
  • Apollo 4000 (three systems in the line: 4200, 4530, 4510) is aimed squarely at Big Data and the datacenter. Mannel noted, “It’s a Big Data platform specifically used for matching compute with a lot of storage. It’s not a RAID box, but it is a storage server with a number of different configurations.”
  • Apollo 6000 & 8000, as noted earlier, target large-scale systems in technical and scientific computing. The users, according to Mannel, need hundreds to thousands of cores. Xeon E3 and E5 processors are used throughout the line, and top models have two accelerator slots that support Xeon Phi.

Time will tell if this is the right product mix. Currently much of the enterprise market is sluggish. HP’s most recent financial results, released May 22, revealed Enterprise Group revenue was down 1% year over year with a 14.5% operating margin. Industry standard servers revenue was up 11%, but storage (down 8%), business critical systems (down 15%), networking (down 16%), and technology services (down 8%) all fell. Likewise, enterprise services (down 16%), infrastructure technology outsourcing (down 20%), and application and business services (down 8%) revenue all declined.

Market fluctuations aside, the low-hanging fruit would seem to be dual-use opportunities in large industries where HPC is already established, such as the auto industry.

“Buy an auto today [and] the thing itself is a big data producer. It uploads all this data, which gets collated and collected and analyzed from a quality standpoint and from a driver preference standpoint. Many of the big auto manufacturers have big data projects at the same time they are using their HPC resources for more of the standard structural analysis and crash analysis and fluid dynamics types of analysis,” said Mannel.

Making the HPC & Big Data gambit work for companies less experienced in HPC and with fewer computational resources will be challenging. For starters, adopting HPC isn’t easy. Complicated systems management, new programming techniques, tricky application software, power and space requirements, and unfamiliar architectures can quickly confound new-to-HPC users.

HP understands the challenges, contended Mannel, and has an effective strategy.

“One core bottleneck is that HPC has been so generic. What’s needed is a solution approach in which the right application [is paired] with the right configuration of the hardware and the right level of management and usability to make it acceptable to customers. I think that’s one area where the HPC market has struggled,” he said.

Within the HPC & Big Data GBU is a formal HPC Pursuit Group, which includes a team of applications engineers who work with end-user customers, ISVs, and the open source community to ensure applications run well on HP hardware for the customer’s specific application and workload. The group works with customers pre- and post-sale to “make sure they are getting the performance they want.”

It’s also important to offer alternative access routes to HPC resources (e.g., cloud), said Mannel, both for pilot programs and production environments.

“We provide, for a number of very large customers as well as smaller customers, the ability to essentially get HPC on tap. If they don’t want to build their own HPC datacenter because of space constraints or lack of expertise, we’ll set up customers with access to HPC resources we control. It can be done on or off the customer’s premises; they just pay a monthly bill,” Mannel said.

For now, HP’s key target markets are pretty vanilla: 1) Oil & Gas, a long-term HPC user and frequent adopter leading advances; 2) Manufacturing, where “[HP] has expertise in computing and engineering work, which tends to be engineering simulation more than anything else;” 3) Financial services, again no surprise; and 4) Life sciences, which “is an emerging market for us. It’s a place where we are going to put more investment and expertise over time. It was a past focus but we are recommitting to it.”

Currently x86 technology dominates the HPC portfolio, but its preeminence is slowly being chipped away. In terms of balancing the portfolio across technologies, a shift is underway: not that x86 is less important, but that other technologies are also becoming important and taking a greater share.

“That’s definitely fair,” said Mannel. “You’re seeing a massive variety of different technologies and techniques coming onboard. Here’s an example. Nearly ten years ago I led a big effort around using FPGAs for HPC. We had a little success but in the end it was only for a few defense applications. It was just hard to program FPGAs.

“Come forward ten years and I’ve talked to some service providers who are swearing by using FPGAs. They are saying, ‘We did the [programming] work, are getting acceleration, and are very happy with the results.’

“I think that’s a really common experience. Now there are GPUs out there, Intel Phi, high-performance interconnects, new programming approaches. It’s all coming about because just waiting for the latest rev of the [next Intel] chip is not creating the level of performance, price performance, and performance per watt that customers are expecting,” noted Mannel.

Intel haters should not get giddy. No sea change is expected soon, certainly not at HP, but change is now part of the technology selection conversation. One HP example is its Moonshot system, aimed at cloud and more traditional datacenter functionality. Moonshot has a power-stingy system-on-chip ARM processor in its arsenal; along with other technology, the ARM processor gives Moonshot a small space and power footprint.

“I use a phrase, ‘[The] right compute at the right time for [the] right data.’ For a long time, the x86 was just that. You could get a very wide range of applications that ran very well on the x86 architecture. Now you are starting to see with some of these new applications that you can get better performance using different technologies. We tend to invest in new technologies that fit our capabilities as a company, our drivers, and that also offer a strong value proposition for customers.”
