Towards Ubiquitous HPC — Passing HPC into the hands of every engineer and scientist

By Wolfgang Gentzsch, UberCloud

January 7, 2016

Countless case studies impressively demonstrate the importance of HPC for engineering and scientific insight, product innovation, and market competitiveness. But so far HPC has mostly been in the hands of a relatively small elite crowd, not easily accessible to the large majority. In this article, however, we argue that – despite the ever increasing complexity of HPC hardware and system components – engineers and scientists have never been this close to ubiquitous HPC, that is, HPC as a common tool for everyone. The main reason for this advance is the continuous progress of HPC software tools, which assist enormously in the design, development, and optimization of engineering and scientific applications. Now, we believe that the next chasm on the way to ubiquitous HPC will soon be crossed by new software container technology, which will dramatically improve software packageability and portability, ease access and use, simplify software maintenance and support, and finally pass HPC into the hands of every engineer and scientist.

First, a Little Container History

“In April 1956, a refitted oil tanker carried fifty-eight shipping containers from Newark to Houston. From that modest beginning, container shipping developed into a huge industry that made the boom in global trade possible. ‘The Box’ tells the dramatic story of the container’s creation, the decade of struggle before it was widely adopted, and the sweeping economic consequences of the sharp fall in transportation costs that containerization brought about. … Economist Marc Levinson shows how the container transformed economic geography. … By making shipping so cheap that industry could locate factories far from its customers, the container paved the way for Asia to become the world’s workshop and brought consumers a previously unimaginable variety of low-cost products from around the globe.”

Whenever I read this story from Marc Levinson’s book “The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger,” I get goosebumps, because of its striking analogy to today’s emerging software containers and their growing importance for all of IT, across every phase of the software life cycle, from design, coding, and testing to release, distribution, access and use, and support and maintenance, and especially for end users and their applications.

40 Years of Expert HPC

The last 40 years saw a continuous struggle of our community with HPC. Let me tell you how I started with HPC. In 1976 I started my first job as a computer scientist at the Max Planck Institute for Plasma Physics in Munich, developing my first program for magnetohydrodynamic plasma simulations on a 3-MFLOPS IBM 360/91. Three years later, at the German Aerospace Center (DLR) in Göttingen, I was involved in the benchmarking and acquisition of DLR’s first Cray-1S, which marked my entry into vector computing. In 1980, my team broke the 50-MFLOPS barrier, a speedup of 20 over DLR’s IBM 3081 mainframe computer, with fluid dynamics simulations of a nonlinear convective flow and a direct Monte Carlo simulation of the von Kármán vortex street. To get to that level of performance, however, we had to change several numerical algorithms and hand-vectorize and optimize quite a few compute-intensive subroutines, which took us several troublesome months. That was HPC for experts, then.

Ubiquitous Computing – Xerox PARC’s Great Mark Weiser

When we use the word ‘ubiquitous’ in the following, we mean synonyms like everywhere, omnipresent, pervasive, universal, and all-over, according to thesaurus.com. Here I’d like to quote the great Mark Weiser of Xerox PARC, who wrote as early as 1988:

“Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives.”

Weiser clearly looks at ‘ubiquitous computing’ with the eyes of the end users, the engineers and scientists I mentioned above. According to Weiser, these users shouldn’t have to care about the ‘engine’ under the hood; all they care about is ‘driving’ safely, reliably, and easily: getting into the car, starting the engine, pulling out into traffic, and reaching point B. Everybody should be able to do that, everywhere, at any time.

Towards Ubiquitous High Performance Computing

Now let’s translate this into ‘Ubiquitous HPC’, with Mark Weiser in mind. Very much simplified, HPC technology splits into two parts, hardware and software; both are immensely complex in themselves, and their mutual interaction is highly sophisticated. For (high performance) computing to become ubiquitous, Weiser suggests making it disappear into the background of our (business) lives; note well, this is from the end user’s point of view. Indeed, in the last decade we made a big step towards this goal: we abstracted the application layer from the physical architecture underneath, through server virtualization. This achievement came with great benefits for the IT folks and for end users alike: the ability to provision servers faster, enhance security, reduce hardware vendor lock-in, increase uptime, improve disaster recovery, isolate applications, extend the life of older applications, and move things to the cloud easily. So, with server virtualization, we already came quite close to ubiquitous computing.

Finally – Ubiquitous High Performance Computing – with HPC Software Containers

But server virtualization never really gained a foothold in HPC, especially for highly parallel applications that require low-latency, high-bandwidth inter-process communication. And on multi-tenant HPC servers, several VMs competing with each other for hardware resources such as I/O, memory, and network often slow down application performance.

Because VMs failed to establish a presence in HPC, the challenges of software distribution, administration, and maintenance kept HPC systems locked up in closets, available to only a select few. There was no way to control the application management chaos that a democratized HPC environment would have brought.

. . . until, in 2013, Docker Linux containers saw the light of day. The key practical difference between Docker and VMs is that Docker is a Linux-based system that uses a userspace interface to the containment features of the Linux kernel. Another difference is that, rather than being a self-contained system in its own right, a Docker container shares the Linux kernel with the host machine’s operating system and with the other containers running on that host. That makes Docker containers extremely lightweight and, in principle, well suited for HPC. Still, it took us at UberCloud about a year to develop, based on micro-service Docker container technology, its macro-service production-ready counterpart for HPC, and to enhance and test it with a dozen applications and engineering workflows on about a dozen different single- and multi-node HPC cloud resources. These high performance interactive software containers, whether they run on-premise, on public, or on private clouds, bring a number of core benefits to otherwise traditional HPC environments, with the goal of making HPC widely available, i.e. ubiquitous:

Packageability: Bundle applications together with libraries and configuration files:

A container image bundles the needed libraries and tools together with the application code and the configuration necessary for these components to work together seamlessly. There is no need to install software or tools on the host compute environment, since the ready-to-run container image already has all required components. Challenges with library dependencies, version conflicts, and configuration disappear, as do the huge replication and duplication efforts in our community when it comes to deploying HPC software; reducing this duplication is also one of the major goals of the OpenHPC initiative.
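
As a minimal sketch (not UberCloud’s actual production recipe), a Dockerfile along the following lines bundles a hypothetical solver binary, its MPI runtime libraries, and its configuration into one self-contained image; the package names, paths, and the mysolver binary are all illustrative:

    # A minimal sketch, not a production recipe: bundle a (hypothetical)
    # solver binary, its MPI runtime, and its configuration into one image.
    FROM ubuntu:16.04

    # Runtime libraries are installed once, inside the image; the host
    # later needs nothing but Docker itself.
    RUN apt-get update && apt-get install -y --no-install-recommends \
            openmpi-bin libopenmpi1.10 \
        && rm -rf /var/lib/apt/lists/*

    # The solver and its configuration travel together, so library
    # versions and config files can never drift apart across machines.
    COPY bin/mysolver /opt/app/bin/mysolver
    COPY etc/solver.conf /opt/app/etc/solver.conf

    ENV PATH=/opt/app/bin:$PATH
    CMD ["mysolver", "--config", "/opt/app/etc/solver.conf"]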

Portability: Build container images once, deploy them rapidly in various infrastructures:

A single container image makes it easy to deploy the workload rapidly and to move it from host to host, between development and production environments, and to other computing facilities. The container allows the end user to select the appropriate environment, such as a public cloud, a private cloud, or an on-premise HPC cluster, and there is no need to install new components or perform setup steps when switching to another host.
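
For illustration, and reusing the hypothetical mysolver image from the sketch above, moving a containerized workload between hosts reduces to a few standard Docker commands:

    # Build the image once, on one machine...
    docker build -t mysolver:1.0 .

    # ...then ship it, unchanged, to a private cluster or a cloud VM
    # (or push it to a registry instead of copying an archive):
    docker save mysolver:1.0 | gzip > mysolver-1.0.tar.gz
    gunzip -c mysolver-1.0.tar.gz | docker load

    # The identical environment now runs on the new host:
    docker run --rm mysolver:1.0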

Accessibility: Bundle tools such as SSH into the container for easy access:

The container is set up to provide easy access via tools such as SSH and VNC for remote desktop sharing. In addition, containers running on compute nodes give both end users and administrators a consistent environment, regardless of the underlying compute infrastructure.
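
As a sketch of what this access looks like in practice (the image name and ports are illustrative, and the image is assumed to run an SSH daemon and a VNC server):

    # Map the container's SSH (22) and VNC display :1 (5901) to host ports:
    docker run -d --name sim1 -p 2222:22 -p 5901:5901 mysolver-desktop:1.0

    # End users then connect the same way on every host environment:
    ssh -p 2222 user@hpc-host
    vncviewer hpc-host:1     # display :1 corresponds to TCP port 5901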

Usability: Provide familiar user interfaces and user tools with the application:

The container holds only the components required to run the application. Eliminating other tools and middleware simplifies the work environment and improves usability. The ability to provide a full-featured desktop increases usability further (especially for pre- and post-processing steps) and reduces training needs. In addition, the HPC containers can be used together with a resource manager such as Slurm or Grid Engine, eliminating many administration tasks.
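
As a sketch of that resource-manager integration (job parameters and image are illustrative, the compute nodes are assumed to run Docker, and details vary from site to site), a Slurm batch script can launch the containerized solver like any other job:

    #!/bin/bash
    #SBATCH --job-name=containerized-cfd
    #SBATCH --nodes=1
    #SBATCH --ntasks=16
    #SBATCH --time=02:00:00

    # Slurm handles scheduling and accounting as usual; the container
    # supplies the complete software stack, with the case data mounted in.
    docker run --rm -v "$PWD:/case" mysolver:1.0 \
        mpirun -np "$SLURM_NTASKS" mysolver --config /case/solver.conf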

In addition, the lightweight nature of the HPC container suggests low performance overhead. Our own performance tests with real applications on several multi-host, multi-container HPC systems demonstrate that there is no significant overhead when running high performance workloads in an HPC container.
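
Containers stay this lightweight because they share the host’s kernel instead of booting their own, which is easy to verify, and the overhead claim can be spot-checked by timing the same run natively and inside the container (illustrative commands, reusing the sketches above):

    # A container runs on the host's kernel: both commands print the same version.
    uname -r
    docker run --rm ubuntu:16.04 uname -r

    # Spot-check the overhead: time an identical run natively and containerized.
    time mpirun -np 16 ./mysolver --config solver.conf
    time docker run --rm -v "$PWD:/case" mysolver:1.0 \
        mpirun -np 16 mysolver --config /case/solver.conf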

Conclusion

During the past two years we at UberCloud have successfully built HPC containers for application software like ANSYS (Fluent, CFX, Icepak, Electromagnetics, Mechanical, LS-Dyna, DesignModeler, and Workbench), CD-adapco STAR-CCM+, COMSOL Multiphysics, NICE DCV, Numeca FINE/Marine and FINE/Turbo, OpenFOAM, PSPP, Red Cedar’s HEEDS, Scilab, Gromacs, and others. These application containers are now running on cloud resources from Advania, Amazon AWS, CPU 24/7, Microsoft Azure, Nephoscale, OzenCloud, and others.

Together with recent advances and trends in application software and in high performance hardware technologies, the advent of lightweight, pervasive, packageable, portable, scalable, interactive, easy-to-access-and-use HPC application containers based on Docker technology, running seamlessly on workstations, servers, and clouds, is bringing us ever closer to what Intel calls the democratization of HPC: the age of ubiquitous high performance computing, in which HPC “technology recedes into the background of our lives.”

More information about these software containers can be found here. Container case studies with real applications in the cloud are available for download. And, quite useful for all software providers, there is the site “Building Your Own ‘Software as a Service’ Business in the Cloud.”
