HPC Iron, Soft, Data, People – It Takes an Ecosystem!

By Alex R. Larzelere

December 11, 2017

Cutting-edge advanced computing hardware (aka "big iron") does not stand by itself. These computers are the pinnacle of a myriad of technologies that must be carefully woven together by people to create the computational capabilities used to deliver insights into the behaviors of complex systems. This collection of technologies and people has been called the High Performance Computing (HPC) ecosystem. It is an appropriate metaphor because it evokes the complicated nature of the interdependent elements needed to deliver first-of-a-kind computing systems.

The idea of the HPC ecosystem has been around for years and most recently appeared in one of the objectives of the National Strategic Computing Initiative (NSCI). The fourth objective calls for “Increasing the capacity and capability of an enduring national HPC ecosystem.” This leads to the questions: what makes up the HPC ecosystem, and why is it so important? Perhaps the more important question is why the United States needs to be careful about letting its HPC ecosystem diminish.

The heart of the HPC ecosystem is clearly the “big humming boxes” that contain the advanced computing hardware. The rows upon rows of cabinets are the focal point of the electronic components, operating software, and application programs whose results create the new scientific and engineering insights that are the real purpose of the HPC ecosystem. However, it is misleading to think that any one computer at any one time is sufficient to make up an ecosystem. Rather, the HPC ecosystem requires a continuous pipeline of computer hardware and software. It is that continuous flow of developing technologies that keeps HPC progressing on the cutting edge.

The hardware element of the pipeline includes systems and components that are under development, but are not currently available. This includes the basic research that will create the scientific discoveries that enable new approaches to computer designs. The ongoing demand for “cutting edge” systems is important to keep system and component designers pushing the performance envelope. The pipeline also includes the currently installed highest performance systems. These are the systems that are being tested and optimized. Every time a system like this is installed, technology surprises are found that must be identified and accommodated. The hardware pipeline also includes systems on the trailing edge. At this point, the computer hardware is quite stable and allows a focus on developing and optimizing modeling and simulation applications.

One of the greatest challenges of maintaining the HPC ecosystem is recognizing that there are significant financial commitments needed to keep the pipeline filled. There are many examples of organizations that believed that buying a single big computer would make them part of the ecosystem. In those cases, they were right, but only temporarily. Being part of the HPC ecosystem requires being committed to buying the next cutting-edge system based on the lessons learned from the last system.

Another critical element of the HPC ecosystem is software. This generally falls into two categories: software needed to operate the computer (also called middleware or the “stack”) and software that provides insights into end-user questions (called applications). Middleware plays the critical role of managing the operations of the hardware systems and enabling the execution of application software. Middleware includes computer operating systems, file systems, and network controllers. It also includes the compilers that translate application programs into the machine language that will be executed on the hardware, as well as a number of other pieces such as libraries of commonly needed functions, programming tools, performance monitors, and debuggers.
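As a rough illustration of how these middleware layers work together, consider the minimal sketch below (not from the article; it assumes an MPI library, a compiler wrapper such as mpicc, and a job launcher such as mpirun are available on the system). The compiler translates the source into machine code, the MPI library supplies commonly needed communication functions, and the launcher and operating system start and schedule the processes.

```c
#include <mpi.h>    /* communication library supplied by the middleware stack */
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);               /* runtime services set up by the system software */

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* how many processes did the launcher start? */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* hand resources back to the system */
    return 0;
}
```

Even this trivial program touches most of the stack: it would typically be built with something like `mpicc hello.c -o hello` and run under the site's batch scheduler with a launcher such as `mpirun -n 4 ./hello`.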

Application software spans a wide range and is as varied as the problems users want to address through computation. Some applications are quick “throwaway” (prototype) attempts to explore potential ways in which computers may be used to address a problem. Other application software is written, sometimes with different solution methods, to simulate the physical behaviors of complex systems. This software will sometimes last for decades and be progressively improved. An important aspect of these types of applications is the experimental validation data that provide confidence that the results can be trusted. For this type of application software, setting up the problem, which can include finite element mesh generation, populating that mesh with material properties, and launching the execution, is an important part of the ecosystem. Other elements of usability include the computers, software, and displays that allow users to visualize and explore simulation results.
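To make those setup steps concrete, here is a deliberately toy sketch (all names and values are invented for illustration, not taken from any real application) of the pre-processing the paragraph describes: generating a simple mesh, populating it with material properties, and handing the problem off for execution.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical illustration only: a 1-D "mesh" of elements, each assigned a
 * material property, followed by a stand-in for launching the solver. */
#define NUM_ELEMENTS 100

typedef struct {
    double length;        /* element size */
    double conductivity;  /* material property assigned to the element */
} Element;

int main(void) {
    Element *mesh = malloc(NUM_ELEMENTS * sizeof(Element));
    if (mesh == NULL) return 1;

    /* "Mesh generation": divide a 1-meter bar into equal elements. */
    for (int i = 0; i < NUM_ELEMENTS; i++) {
        mesh[i].length = 1.0 / NUM_ELEMENTS;
        /* "Populating with material properties": one material on each half of the bar. */
        mesh[i].conductivity = (i < NUM_ELEMENTS / 2) ? 400.0 : 50.0;
    }

    /* "Launching the execution": in a real code this is where the solver runs,
     * typically submitted to the batch system rather than started interactively. */
    printf("Mesh of %d elements ready for the solver.\n", NUM_ELEMENTS);

    free(mesh);
    return 0;
}
```

Real applications replace each of these steps with far more elaborate tools, but the workflow of mesh, materials, and launch is the part of the ecosystem the paragraph points to.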

Data is yet another essential element of the HPC ecosystem. Data is the lifeblood that flows through the ecosystem’s circulatory system to keep it doing useful things. The HPC ecosystem includes systems that hold data and move it from one element to another. Hardware aspects of the data system include memory, storage devices, and networking. Software device drivers and file systems are also needed to keep track of the data. With the growing trend to add machine learning and artificial intelligence to the HPC ecosystem, its ability to process and productively use data is becoming increasingly significant.

Finally, and most importantly, trained and highly skilled people are an essential part of the HPC ecosystem. Just like computing systems, these people make up a “pipeline” that starts in elementary school and continues through undergraduate and then advanced degrees. Attracting and educating these people in computing technologies is critical. Another important part of the people pipeline of the HPC ecosystem is the jobs offered by academia, national labs, government, and industry. These professional experiences provide the opportunities needed to practice and hone HPC skills.

The origins of the United States’ HPC ecosystem date back to the decision by the U.S. Army’s Ballistic Research Laboratory to procure an electronic computer to calculate ballistic tables for its artillery during World War II (i.e., ENIAC). That event led to finding and training the people, who in many cases were women, to program and operate the computer. The ENIAC was just the start of the nation’s significant investment in hardware, middleware, and applications. However, just because the United States was first does not mean that it was alone. Europe and Japan have also had robust HPC ecosystems for years, and most recently China has determinedly set out to create one of its own.

The United States and other countries made the necessary investments in their HPC ecosystems because they understood the strategic advantages that staying at the cutting edge of computing provides. These well-documented advantages apply to many areas, including national security, discovery science, economic competitiveness, energy security, and curing diseases.

The challenge of maintaining the HPC ecosystem is that, just like a natural ecosystem, the HPC version can be threatened by becoming too narrow and lacking diversity. This applies to the hardware, the middleware, and the application software. Betting on just a few types of technologies can be disastrous if one approach fails. Diversity also means having and using a healthy range of systems, from the highest-performance cutting-edge machines to widely deployed mid- and low-end production systems. Another aspect of diversity is the range of applications that can productively use advanced computing resources.

Perhaps the greatest challenge to an ecosystem is complacency: assuming that it, and the necessary people, will always be there. This can take the form of an attitude that it is good enough to become an HPC technology follower and acceptable to purchase HPC systems and services from other nations. Once an HPC ecosystem has been lost, it is not clear whether it can be regained. A robust HPC ecosystem can last for decades, through many “half-lives” of hardware. A healthy ecosystem puts countries in a leadership position, which means the ability to influence HPC technologies in ways that best serve their strategic goals. Happily, the fourth NSCI objective signals that the United States understands these challenges and the importance of maintaining a healthy HPC ecosystem.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire’s policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies that include: advanced modeling and simulation; high performance computing; artificial intelligence; the Internet of Things; and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), private industry, and as founder of a small business. Throughout that time, he led programs that implemented the use of cutting edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”
