HPC Iron, Soft, Data, People – It Takes an Ecosystem!

By Alex R. Larzelere

December 11, 2017

Cutting-edge advanced computing hardware (aka big iron) does not stand by itself. These computers are the pinnacle of a myriad of technologies that must be carefully woven together by people to create the computational capabilities used to deliver insights into the behaviors of complex systems. This collection of technologies and people has been called the High Performance Computing (HPC) ecosystem. It is an appropriate metaphor because it evokes the complicated, interdependent nature of the elements needed to deliver first-of-a-kind computing systems.

The idea of the HPC ecosystem has been around for years and most recently appeared in one of the objectives of the National Strategic Computing Initiative (NSCI). The 4th objective calls for “Increasing the capacity and capability of an enduring national HPC ecosystem.” This leads to the questions: what makes up the HPC ecosystem, and why is it so important? Perhaps the more important question is, why does the United States need to be careful about letting its HPC ecosystem diminish?

The heart of the HPC ecosystem is clearly the “big humming boxes” that contain the advanced computing hardware. The rows upon rows of cabinets are the focal point of the electronic components, operating software, and application programs that produce the new scientific and engineering insights that are the real purpose of the HPC ecosystem. However, it is misleading to think that any one computer at any one time is sufficient to make up an ecosystem. Rather, the HPC ecosystem requires a continuous pipeline of computer hardware and software. It is that continuous flow of developing technologies that keeps HPC progressing on the cutting edge.

The hardware element of the pipeline includes systems and components that are under development, but are not currently available. This includes the basic research that will create the scientific discoveries that enable new approaches to computer designs. The ongoing demand for “cutting edge” systems is important to keep system and component designers pushing the performance envelope. The pipeline also includes the currently installed highest performance systems. These are the systems that are being tested and optimized. Every time a system like this is installed, technology surprises are found that must be identified and accommodated. The hardware pipeline also includes systems on the trailing edge. At this point, the computer hardware is quite stable and allows a focus on developing and optimizing modeling and simulation applications.

One of the greatest challenges of maintaining the HPC ecosystem is recognizing that there are significant financial commitments needed to keep the pipeline filled. There are many examples of organizations that believed that buying a single big computer would make them part of the ecosystem. In those cases, they were right, but only temporarily. Being part of the HPC ecosystem requires being committed to buying the next cutting-edge system based on the lessons learned from the last system.

Another critical element of the HPC ecosystem is software. This generally falls into two categories: software needed to operate the computer (also called middleware or the “stack”) and software that provides insights into end-user questions (called applications). Middleware plays the critical role of managing the operation of the hardware and enabling the execution of applications software. It includes computer operating systems, file systems, and network controllers, as well as the compilers that translate application programs into the machine language that will be executed on the hardware. Middleware also encompasses a number of other pieces, including libraries of commonly needed functions, programming tools, performance monitors, and debuggers.
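
To make the middleware/application relationship concrete, here is a minimal sketch (not drawn from the article) of about the smallest possible HPC application: a C program that relies on an MPI library for communication, a compiler wrapper such as mpicc to translate it into machine code, and a job launcher to start it across processors. The file name and the build/run commands in the comments are illustrative only and assume a generic MPI installation (e.g., MPICH or Open MPI).

    /* hello_mpi.c: a minimal application that leans on middleware at every step:
     * the compiler wrapper (mpicc), the MPI communication library, and the
     * job launcher. Illustrative build/run commands, assuming a generic
     * MPI installation:
     *   mpicc -O2 hello_mpi.c -o hello_mpi
     *   mpirun -np 8 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);               /* middleware sets up communication */

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* how many processes in total? */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* middleware tears down communication */
        return 0;
    }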

Applications software spans a wide range and is as varied as the problems users want to address through computation. Some applications are quick “throwaway” (prototype) attempts to explore potential ways in which computers may be used to address a problem. Other applications software is written, sometimes with different solution methods, to simulate the physical behaviors of complex systems. This software will sometimes last for decades and will be progressively improved. An important aspect of these applications is the experimental validation data that provide confidence that the results can be trusted. For this type of applications software, setting up the problem, which can include generating a finite element mesh, populating that mesh with material properties, and launching the execution, is an important part of the ecosystem. Other elements of usability include the computers, software, and displays that allow users to visualize and explore simulation results.
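
As an illustration of what “setting up the problem” means in practice, the following deliberately simplified sketch (not from the article, with made-up material values) builds a one-dimensional mesh, assigns each cell a material property, and time-steps an explicit heat-diffusion calculation. Production simulation codes add parallelism, input/output, validation against experimental data, and visualization, but the basic shape (mesh, materials, solve) is the same.

    /* heat1d.c: a deliberately simplified sketch of an application code.
     * Build a 1-D "mesh", populate it with material properties, and
     * time-step an explicit heat-diffusion solve. All values are made up.
     */
    #include <stdio.h>

    #define N     100      /* number of mesh points */
    #define STEPS 1000     /* number of time steps */

    int main(void)
    {
        double T[N], Tnew[N], alpha[N];
        double dx = 1.0 / N, dt = 1e-5;

        /* "Mesh generation" and material assignment: two materials with
         * different (made-up) diffusivities, hot boundary on the left. */
        for (int i = 0; i < N; i++) {
            alpha[i] = (i < N / 2) ? 1.0 : 0.1;
            T[i] = (i == 0) ? 100.0 : 0.0;
        }

        /* Explicit time stepping: each interior point relaxes toward its neighbors. */
        for (int s = 0; s < STEPS; s++) {
            Tnew[0] = T[0];           /* fixed-temperature boundaries */
            Tnew[N - 1] = T[N - 1];
            for (int i = 1; i < N - 1; i++)
                Tnew[i] = T[i] + alpha[i] * dt / (dx * dx)
                                 * (T[i - 1] - 2.0 * T[i] + T[i + 1]);
            for (int i = 0; i < N; i++)
                T[i] = Tnew[i];
        }

        printf("Midpoint temperature after %d steps: %f\n", STEPS, T[N / 2]);
        return 0;
    }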

Data is yet another essential element of the HPC ecosystem. Data is the lifeblood that flows through the ecosystem’s circulatory system and keeps it doing useful things. The HPC ecosystem includes systems that hold data and move it from one element to another. Hardware aspects of the data system include memory, storage devices, and networking; on the software side, device drivers and file systems are needed to keep track of the data. With the growing trend to add machine learning and artificial intelligence to the HPC ecosystem, its ability to process and productively use data is becoming increasingly significant.

Finally, and most importantly, trained and highly skilled people are an essential part of the HPC ecosystem. Just like the computing systems, these people make up a “pipeline” that starts in elementary school and continues through undergraduate and then advanced degrees. Attracting and educating these people in computing technologies is critical. Another important part of the people pipeline of the HPC ecosystem is the jobs offered by academia, national labs, government, and industry. These professional experiences provide the opportunities needed to practice and hone HPC skills.

The origins of the United States’ HPC ecosystem date back to the U.S. Army’s decision during World War II to procure an electronic computer, the ENIAC, to calculate ballistics tables for its artillery. That event led to finding and training the people, who in many cases were women, to program and operate the computer. The ENIAC was just the start of the nation’s significant investment in hardware, middleware software, and applications. However, just because the United States was first does not mean that it has been alone. Europe and Japan have also had robust HPC ecosystems for years, and most recently China has determinedly set out to create one of its own.

The United States and other countries made the necessary investments in their HPC ecosystems because they understood the strategic advantages that staying at the cutting edge of computing provides. These well-documented advantages apply to many areas, including national security, discovery science, economic competitiveness, energy security, and curing diseases.

The challenge of maintaining the HPC ecosystem is that, just like a natural ecosystem, the HPC version can be threatened by becoming too narrow and lacking diversity. This applies to the hardware, the middleware, and the applications software. Betting on just a few types of technologies can be disastrous if one approach fails. Diversity also means having and using a healthy range of systems, from the highest performance cutting-edge machines to widely deployed mid- and low-end production systems. Another aspect of diversity is the range of applications that can make productive use of advanced computing resources.

Perhaps the greatest challenge to an ecosystem is complacency and assuming that it, and the necessary people, will always be there. This can take the form of an attitude that it is good enough to become an HPC technology follower and acceptable to purchase HPC systems and services from other nations. Once an HPC ecosystem has been lost, it is not clear whether it can be regained. A robust HPC ecosystem can last for decades, through many “half-lives” of hardware. A healthy ecosystem puts countries in a leadership position, and that means the ability to influence HPC technologies in ways that best serve their strategic goals. Happily, the 4th NSCI objective signals that the United States understands these challenges and the importance of maintaining a healthy HPC ecosystem.

About the Author

Alex Larzelere is a senior fellow at the U.S. Council on Competitiveness, the president of Larzelere & Associates Consulting and HPCwire’s policy editor. He is currently a technologist, speaker and author on a number of disruptive technologies, including advanced modeling and simulation, high performance computing, artificial intelligence, the Internet of Things, and additive manufacturing. Alex’s career has included time in federal service (working closely with DOE national labs), in private industry, and as the founder of a small business. Throughout that time, he led programs that implemented the use of cutting-edge advanced computing technologies to enable high resolution, multi-physics simulations of complex physical systems. Alex is the author of “Delivering Insight: The History of the Accelerated Strategic Computing Initiative (ASCI).”
