HPC Lessons for the Wider Enterprise World

By Nicole Hemsoth

January 28, 2014

Is HPC so specialized that the lessons learned from large-scale infrastructure (at all layers) are not transferable to mirrored challenges in large-scale enterprise settings?

Put another way, are the business-critical problems that companies tackle really so vastly different from the hardware and software issues that large supercomputing centers have already faced and, in many areas, overcome? Granted, there is already a significant amount of HPC to be found in enterprise datacenters worldwide in a number of areas—oil and gas, financial services, the life sciences, government and more. But as everything in technology seems bent on convergence, is there not a wider application for HPC-driven technologies in an expanding set of markets?

This is the first part of a series of focused pieces around these framing questions about how HPC maps onto the wider world. The sections of our extended special feature will target HPC-to-enterprise lessons in terms of hardware and infrastructure; software and applications; management at scale; cloud computing; big data; accelerators and more. But to kick things off, we wanted to build consensus around some of the main themes and ideas behind any movement that’s happening (or needs to happen) as HPC lessons trickle into the scale-, efficiency-, performance- and data-conscious world of the modern enterprise.

In some circles, HPC is viewed from afar as an academic-only landscape, dotted with rare peaks representing actual enterprise use. Of course, those inside supercomputing know that this portrait is limited—that HPC has a strong foothold in the areas mentioned above, and tremendous potential to reshape new areas that either thought HPC was out of reach or are using HPC but simply don’t use the term. What is needed is a comprehensive view of how HPC can be broadly useful to critical segments of enterprise IT…and that’s what we intend to offer over the next couple of weeks.

The answer to whether there are a multitude of lessons HPC can teach the wider enterprise world, at least according to those we’ve spoken with for this series, is a resounding yes. If there’s any disagreement, it’s on how those lessons translate, which are truly unique to the HPC experience, and of course, which hold the most promise for improved productivity and competitiveness, and in which application areas.

Addison Snell, CEO of Intersect360 Research, whose research group follows the overlap between enterprise and HPC, made some parallels to put the question in context. “Traditionally, one of the characteristics that separated HPC from enterprise computing was that HPC featured jobs that would run to completion, and there would be a benefit in completing them faster, such as running a weather forecast, simulating a crash test, or searching for proteins that fit together with a given molecule.” By contrast, he says, enterprise environments are designed to run in steady state (email systems, CRM databases, etc.). “HPC purchases would tend to be driven by performance, with relatively faster adoption of new technologies, while enterprise computing was driven by reliability, with slower technology adoption.”

“Early adopters and bellwethers in high performance computing are always the first to encounter new challenges as they push the limits of computation and data management,” argued Herb Schultz from IBM’s Technical Computing and Analytics group. He says that many of the challenges faced in the world of high performance computing “later come to haunt the broader commercial IT community,” and that “how first movers respond to challenges with new technologies and improved techniques establishes a proven foundation that the next waves of users can exploit.”

As Fritz Ferstl, CTO at Univa, told us, there are essentially three “divisions” in the HPC industry. There are the national labs and big science organizations; enterprise commercial HPC (as found in the expected verticals, including oil and gas, financial services, life sciences, etc.); and there is “a third not often recognized as HPC but rather as data-centric analysis, also known as big data.”

Ferstl says that the lab-level HPC category is “specific in that its leading edge requires tightly coupled architectures with the densest network interconnects, which drive up cost and complexity. They are geared toward running few ultra-large applications that demand aggregate memory and would take unacceptable amounts of runtime if not executed on such large systems.” One step away from this are the commercial sectors that rely on HPC for their competitive edge. Of these, Ferstl notes that whether it’s new reservoirs of oil and gas being explored, next-generation products like cars or airplanes being designed and tested, or innovative drugs being discovered, “there would be no progress in any of these cases and many more if it wasn’t for HPC as a key instrument for investigation, design, development, experimentation and validation.”

But the final item on his list, and the one crucial to the enterprise transition (and to HPC’s lessons for it), is the heavy subject of data. What’s really driving this forward motion of HPC technology into the enterprise is that buzzword we just can’t get away from these days: big data. Some might argue that the trend has actually been one of the best things to happen for HPC’s ability to propel itself into the wider enterprise world.

Snell commented that “today, especially with big data analytics, more companies are encountering performance-sensitive applications that run to completion—at least in terms of iterations.” He said his research has revealed that new categories of non-HPC enterprise users are emerging, all of whom are considering performance and scalability as top purchase criteria. “In some cases,” he said, “these enterprises can be just as likely to explore new technologies as HPC users have been for years.”

Some argue that in general, aside from being a question of data pressures, business need, and competitive edge, the real lessons HPC can teach are about talent and R&D capability. As Paul Dlugosch, Automata product director at Micron, described, “One of the first lessons that come to mind is that people matter. While the HPC industry often celebrates our accomplishments on the basis of technical and performance benchmarks, the cost of achieving those benchmarks is often not discussed. The cost of system and semiconductor development can be easy enough to quantify. It is far more difficult, though, to determine the ‘use’ cost of advanced technologies. While the raw power of our semiconductors and systems is immense, it is the organic part of the system, the human being, that is emerging as a significant bottleneck.”

“Fully exploiting the parallelism that exists in many high performance computing systems continues to absorb incredible amounts of human resources,” he argued. “Given the large scale of commercial/enterprise data centers, it is just as important to pay close attention to this human factor.  The HPC industry is certainly aware of this problem and is developing new architectures, tools and methodologies to improve human productivity. As commercial and enterprise data centers grow in capability and scale it will become just as important to consider the productivity of the humans involved in system programming, management and scaling.”

It should be noted that on any level of this question, it’s not a clear matter of teaching from the top down. While HPC has solved a number of problems in some of the most challenging data and compute environments, especially in terms of scale, data movement, and application complexity, there are elements that can filter from the enterprise setting to HPC—even to that “big national lab” variety Ferstl describes.

There is general agreement that there are multiple lessons high performance computing can carry into mainstream enterprise environments, no matter what vertical is involved. But on the flip side, there is also general agreement that many innovations are spinning out of a new class of enterprise environments: the web-scale companies, whose bare-bones hardware runs open source, natively developed, purpose-built, nimble applications, have something to offer the supercomputing world as well.

Jason Stowe, CEO of HPC cloud company Cycle Computing, put it best when he told us, “We in HPC pay attention to the fastest systems in the world: the fastest CPUs, interconnects, and benchmarks. From petaflops to petabytes, we [in HPC] publish and analyze these numbers unlike any other industry… While we’ll continue to measure things like LINPACK, utilization, and queue wait times, we’re now looking at things like Dollars per Unit Science and Dollars per Simulation, which ironically, are lessons that have been learned from enterprise.”
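To make the shift in measurement concrete, here is a minimal sketch of how a “dollars per simulation” figure might sit alongside a traditional utilization number. Every name and value below is a hypothetical assumption for illustration only, not data from Cycle Computing or any vendor.

```python
# Illustrative only: hypothetical figures showing how a cost-per-simulation
# metric can be derived next to a classic utilization metric.
# None of these numbers come from a real system.

monthly_cost_usd = 250_000.0      # assumed: amortized hardware + power + staff
core_hours_available = 1_440_000  # assumed: 2,000 cores * 720 hours in a month
core_hours_used = 1_080_000       # assumed: core-hours consumed by finished jobs
simulations_completed = 4_500     # assumed: jobs that ran to completion

# Traditional HPC-style metric: how busy the machine was
utilization = core_hours_used / core_hours_available

# Enterprise-style metrics: what each unit of science actually cost
cost_per_simulation = monthly_cost_usd / simulations_completed
cost_per_core_hour = monthly_cost_usd / core_hours_used

print(f"Utilization:         {utilization:.1%}")
print(f"Cost per simulation: ${cost_per_simulation:,.2f}")
print(f"Cost per core-hour:  ${cost_per_core_hour:,.3f}")
```

The point of the exercise is the denominator: once completed simulations, rather than peak flops, define the unit of value, technology choices start to be judged the way enterprises have always judged them.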

From the people who power both enterprise and HPC systems to the functional elements of the machines and how they differ, just as many new questions emerge from this first one: what can HPC lend to large-scale business operations?

Stay tuned over the next two weeks as this series expands and homes in on specific issues and topics that influence how enterprises will look to HPC for answers to scale, data, management and other challenges.

CONTINUE to PART II — “HPC Roots Feed Big Data Branches”
