HPC Lessons for the Wider Enterprise World

By Nicole Hemsoth

January 28, 2014

Is HPC so specialized that the lessons learned from large-scale infrastructure (at all layers) are not transferable to mirrored challenges in large-scale enterprise settings?

Put another way, are the business-critical problems that companies tackle really so vastly different from the associated hardware and software issues that large supercomputing centers have already faced and, in many areas, overcome? Granted, there is already a significant amount of HPC to be found in enterprise datacenters worldwide in a number of areas—oil and gas, financial services, the life sciences, government and more. But as everything in technology seems bent on convergence, is there not a wider application for HPC-driven technologies in an expanding set of markets?

This is the first part of a series of focused pieces around these framing questions about how HPC maps onto the wider world. The sections of our extended special feature will target HPC-to-enterprise lessons in terms of hardware and infrastructure; software and applications; management at scale; cloud computing; big data; accelerators and more. But to kick things off, we wanted to build consensus around some of the main themes and ideas behind any movement that’s happening (or needs to happen) as HPC lessons trickle into the scale, efficiency, performance and data-conscious world of the modern enterprise.

In some circles, HPC is viewed from afar as an academic-only landscape, dotted with rare peaks representing actual enterprise use. Of course, those inside supercomputing know that this portrait is limited—that HPC has a strong foothold in the areas mentioned above, and tremendous potential to reshape new areas that either thought HPC was out of reach or are using HPC but simply don’t use the term. What is needed is a comprehensive view of how HPC can be broadly useful to critical segments of enterprise IT…and that’s what we intend to offer over the next couple of weeks.

The answer to whether there are a multitude of lessons HPC can teach the wider enterprise world, at least according to those we’ve spoken with for the series on this subject, is a resounding yes. If there’s any disagreement, it’s on how those lessons translate, which are truly unique to the HPC experience, and of course, which hold the most promise for improved productivity, competitiveness or even new application areas.

Addison Snell, CEO of Intersect360 Research, whose research group follows the overlap between enterprise and HPC, made some parallels to put the question in context. “Traditionally, one of the characteristics that separated HPC from enterprise computing was that HPC featured jobs that would run to completion, and there would be a benefit in completing them faster, such as running a weather forecast, simulating a crash test, or searching for proteins that fit together with a given molecule.” By contrast, he says, enterprise environments are designed to run in steady state (email systems, CRM databases, etc.). “HPC purchases would tend to be driven by performance, with relatively faster adoption of new technologies, while enterprise computing was driven by reliability, with slower adoption of new technologies.”

“Early adopters and bellwethers in high performance computing are always the first to encounter new challenges as they push the limits of computation and data management,” argued Herb Schultz from IBM’s Technical Computing and Analytics group. He says that many of the challenges faced in the world of high performance computing “later come to haunt the broader commercial IT community,” and that “how first movers respond to challenges with new technologies and improved techniques establishes a proven foundation that the next waves of users can exploit.”

As Fritz Ferstl, CTO at Univa, told us, there are essentially three “divisions” in the HPC industry. There are the national labs and big science organizations; enterprise commercial HPC (as found in the expected verticals, including oil and gas, financial services, life sciences, etc.); and there is “a third not often recognized as HPC but rather as data-centric analysis, also known as big data.”

Ferstl says that the lab-level HPC category is “specific in that its leading edge requires tightly coupled architectures with the densest network interconnects, which drive up cost and complexity. They are geared toward running few ultra-large applications that demand aggregate memory and would take unacceptable amounts of runtime if not executed on such large systems.” One step away from this are the commercial sectors that rely on HPC for their competitive edge. Whether it’s new reservoirs of oil and gas being explored, next-generation products like cars or airplanes being designed and tested, or innovative drugs being discovered, Ferstl notes, “there would be no progress in any of these cases and many more if it wasn’t for HPC as a key instrument for investigation, design, development, experimentation and validation.”

But the final item on his list—and the one crucial to the enterprise transition (and HPC’s lessons to teach it)—is the heavy subject of data. What’s really driving this forward motion of HPC tech into the enterprise is that buzzword we just can’t get away from these days: big data. Some might argue that the trend has actually been one of the best things that’s happened for HPC’s ability to propel itself into the wider enterprise world.

Snell commented that “today, especially with big data analytics, more companies are encountering performance-sensitive applications that run to completion—at least in terms of iterations.” He said his research has revealed that new categories of non-HPC enterprise users are emerging, all of whom are considering performance and scalability as top purchase criteria. “In some cases,” he said, “these enterprises can be just as likely to explore new technologies as HPC users have been for years.”

Some argue that in general, aside from being a question of data pressures, business need, and competitive edge, the real lessons HPC can teach are about talent and R&D capability. As Paul Dlugosch, Automata product director at Micron, described, “One of the first lessons that comes to mind is that people matter. While the HPC industry often celebrates our accomplishments on the basis of technical and performance benchmarks, the cost of achieving those benchmarks is often not discussed. The cost of system and semiconductor development can be easy enough to quantify. It is far more difficult, though, to determine the ‘use’ cost of advanced technologies. While the raw power of our semiconductors and systems is immense, it is the organic part of the system, the human being, that is emerging as a significant bottleneck.”

“Fully exploiting the parallelism that exists in many high performance computing systems continues to absorb incredible amounts of human resources,” he argued. “Given the large scale of commercial/enterprise data centers, it is just as important to pay close attention to this human factor. The HPC industry is certainly aware of this problem and is developing new architectures, tools and methodologies to improve human productivity. As commercial and enterprise data centers grow in capability and scale, it will become just as important to consider the productivity of the humans involved in system programming, management and scaling.”

It should be noted that on any level of this question, it’s not a clear matter of teaching from the top down. While HPC has solved a number of problems in some of the most challenging data and compute environments, especially in terms of scale, data movement and application complexity, there are elements that can filter from the enterprise setting to HPC—even that “big national lab” variety Ferstl describes.

There is general agreement that there are multiple lessons high performance computing can carry into mainstream enterprise environments, no matter what vertical is involved. But on the flipside, there is also general agreement that many innovations are spinning out of the new class of enterprise environments—that the web-scale companies, with their bare-bones hardware running open source, natively developed, purpose-built, nimble applications, have something to offer the supercomputing world as well.

Jason Stowe, CEO of HPC cloud company Cycle Computing, put it best when he told us, “We in HPC pay attention to the fastest systems in the world: the fastest CPUs, interconnects, and benchmarks. From petaflops to petabytes, we [in HPC] publish and analyze these numbers unlike any other industry…While we’ll continue to measure things like LINPACK, utilization, and queue wait times, we’re now looking at things like Dollars per Unit Science, and Dollars per Simulation, which, ironically, are lessons that have been learned from enterprise.”
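Stowe’s cost-centric metrics lend themselves to a quick worked example. Below is a minimal sketch, in Python, of how a “dollars per simulation” figure might be computed under a simple amortized-cost model; the function name, the cost model and all of the numbers are our own illustrative assumptions, not Cycle Computing’s methodology.

```python
# Illustrative sketch (not from the article): one simple way to frame the
# "dollars per simulation" metric Stowe mentions, using an amortized-cost
# model. All names and figures here are hypothetical assumptions.

def dollars_per_simulation(total_annual_cost, core_hours_delivered,
                           core_hours_per_simulation):
    """Amortized cost of running one simulation.

    total_annual_cost         -- hardware, power, facilities, staff ($/year)
    core_hours_delivered      -- usable core-hours the system provides per year
    core_hours_per_simulation -- average compute footprint of one job
    """
    cost_per_core_hour = total_annual_cost / core_hours_delivered
    return cost_per_core_hour * core_hours_per_simulation

# Hypothetical example: a $2M/year cluster delivering 8M usable core-hours,
# running crash-test simulations of 5,000 core-hours each.
print(dollars_per_simulation(2_000_000, 8_000_000, 5_000))  # -> 1250.0
```

However it is calculated in practice, the appeal of such a metric is that it shifts the purchase conversation from peak performance toward delivered science per dollar.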

From the people who power both enterprise and HPC systems to the functional elements of the machines and how they differ, just as many new questions emerge, beginning with the first: what can HPC lend to large-scale business operations?

Stay tuned over the next two weeks as this series expands and homes in on specific issues and topics that influence how enterprises will look to HPC for answers to scale, data, management and other challenges.

CONTINUE to PART II — “HPC Roots Feed Big Data Branches”
