HPC: Still Looking for Love from Manufacturers

By Michael Feldman

March 28, 2012

One of the prominent themes of this week’s High Performance Computer and Communications Council (HPCC) Conference revolved around the question of why many users with a need for HPC are still resistant to adopting the technology. John West, the Director of the DoD’s High Performance Computing Modernization Program and the organizer of this year’s HPCC program, talked at length about this phenomenon in his conference kickoff presentation on Monday morning, titled “What’s Missing From HPC?”

There are plenty of drivers for bringing more users into the HPC fold, from the practical motivations of hardware and software vendors, who would like to move more product, to the more altruistic interests of the HPC’ers, who want to expand the community, and of the government, which sees the technology as a way to improve industry competitiveness and create jobs.

The problem has been dubbed the “Missing Middle,” referring to the absence of HPC users between the topmost supercomputing practitioners at the national labs and those doing technical computing via MATLAB and CAE/CAD tools on personal computers and workstations. Many of these missing users are in the manufacturing sector, but they also inhabit more established HPC enclaves such as defense, life sciences and finance.

All things being equal, one would expect a continuum of HPC practitioners from bottom to top, with a pyramidal distribution that reflects application scale and complexity. But that’s not the case. While there are millions of people doing technical computing on the desktop and perhaps tens of thousands of supercomputing users at the top, the middle ground, population-wise, has a lot more in common with the supercomputing group.

For these types of users, system size is in the “closet cluster” realm, up to maybe a few racks of servers. In fact, this represents the average size of HPC systems for people who are not doing “big science”-type supercomputing. In that sense, the middle is not so much missing as grossly underpopulated.

According to West, most people using supercomputing today came to the technology because they didn’t have a choice. Astrophysicists couldn’t create two galaxies in a lab and watch them collide; they had to simulate the whole thing digitally. And since supercomputing practitioners are more or less a captive audience, in many cases the tools available to them are not all that great. They often rely on specialized compilers and development environments, legacy programming languages, command line interfaces, and obscure Linux commands. Meanwhile, the larger computing community has moved on to pretty GUIs and a rich ecosystem of more intuitive tools.

That by itself has made the jump from desktop computing to clusters a painful one. But as West mentioned later, there are a number of new interfaces being developed (usually specialized for individual applications or application domains) that are much more user friendly.

Another barrier to moving up the computing food chain is expensive hardware and software. “We’re mostly over this one,” West noted. “It’s not so expensive anymore, although if you’re talking about small manufacturers or small businesses, $50,000 is still real money.”

Then there’s the management of the cluster. If your organization has no IT admin, or has one who is used to managing only Windows PCs, then the decision to add an HPC system is a lot more difficult. The choices (ignoring the cloud option) are either to hire a cluster administrator or to convince IT to come up to speed on the technology.

Compounding that problem is the lack of a complete tool chain — the various codes, libraries and development tools that are needed to create the models and other user applications. Since these are often missing even at the high-end of HPC, their absence for entry-level users should come as no particular surprise. The solution here, said West, is non-trivial, and comes down to filling in those software gaps on a case-by-case basis.

One barrier that is not discussed as much is the lack of expertise and social support for HPC systems. In a workplace with no previous experience using the technology, the initial user is often the loneliest guy or gal in the building, with no one to ask questions of when something goes wrong. “This is a skills problem, at its heart,” West said, adding that what is needed is a lot more people in industry who are at least computationally literate, along with a smaller number of computational professionals.

Related to the cultural and technical unfamiliarity with high performance computing is the fact that most non-HPC users already have something that works today. It might not be the fastest or slickest solution, but it serves its purpose. A typical desktop workflow might mean starting up a job on a PC before going home for the evening, and then getting the results back the following morning. If that doesn’t sound like an optimal workflow, at least it’s a comfortable one.

The opportunity for HPC arises when the pace of desktop computation isn’t fast enough, either because it’s limiting product innovation, it’s causing deadlines to be missed, or both. It’s been estimated that maybe half the 280,000 or so US manufacturers fall into that category. And given that only 4 to 8 percent of those manufacturers currently employ HPC, the opportunity does indeed appear to loom large.
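To put those percentages in perspective, here is a back-of-the-envelope calculation (a minimal sketch in Python, using only the figures quoted above) of what the untapped market looks like in absolute terms:

```python
# Back-of-the-envelope sizing of the "Missing Middle" in US manufacturing,
# using only the figures quoted in the article.

us_manufacturers = 280_000        # approximate number of US manufacturers
share_constrained = 0.5           # estimated share limited by desktop-only computing
adoption_range = (0.04, 0.08)     # 4 to 8 percent currently employ HPC

candidates = us_manufacturers * share_constrained
adopters_low = us_manufacturers * adoption_range[0]
adopters_high = us_manufacturers * adoption_range[1]

print(f"Could benefit from HPC: ~{candidates:,.0f} manufacturers")
print(f"Currently using HPC: {adopters_low:,.0f} to {adopters_high:,.0f}")
# Roughly 140,000 candidates versus 11,200 to 22,400 current users --
# well over 100,000 firms sitting in the untapped middle.
```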

Of course, the underlying assumption here is that Moore’s Law is not sufficient for technical computing at any level. In other words, desktop systems that are regularly replaced with ones based on faster chips would not be powerful enough to keep up with an escalating demand for better application fidelity or more complex computations. While it’s true that desktop machines of today have as much computational power as the top supercomputers of 15 years ago, that’s still too slow for traditional supercomputing applications. To escape the more limited progression of Moore’s Law, HPC has turned to multiplying those processors across ever-larger clusters. But is Moore’s Law too slow for a typical CAE/CAD user?
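The arithmetic behind that claim is worth spelling out. The sketch below (assuming the classic Moore’s Law doubling period of roughly 18 to 24 months, a range not taken from the article itself) shows how much single-machine performance grows over 15 years:

```python
# Rough Moore's Law arithmetic: performance growth of a single machine
# over 15 years, assuming a doubling every 18 to 24 months (an assumed
# range, not a figure from the article).

YEARS = 15
for doubling_months in (18, 24):
    doublings = YEARS * 12 / doubling_months
    factor = 2 ** doublings
    print(f"Doubling every {doubling_months} months: "
          f"{doublings:.1f} doublings, ~{factor:,.0f}x in {YEARS} years")

# An 18-month doubling yields ~1,000x; 24 months yields ~180x. That is
# roughly why a 2012 desktop can match a late-1990s teraflop-class
# supercomputer, yet still trails 2012-era petaflop machines by orders
# of magnitude -- hence the turn to ever-larger clusters.
```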

Since the cluster is the lens through which HPC practitioners look at computing problems, it’s no surprise they believe the technology is appropriate for most, if not all, technical computing problems. In his conference presentation, West acknowledged that mindset, pointing out that people in this community tend to view HPC as an “unalloyed good,” which can be applied to good effect nearly everywhere. “I think that’s not always helpful,” admitted West.

Intersect360 Research CEO Addison Snell, who has been following the HPC-manufacturing gap for the past couple of years, remarked that not every company is going to need the technology. According to him, the easiest converts will be those manufacturers who need to create innovative products, rather than just standard widgets that fit into a supply chain.

At the conference this week, there were three examples of such companies that made a successful leap to HPC: Simpson Strong-Tie, which employs high-fidelity FEA models for its structural engineering designs; Accio Energy, a wind energy startup that is using HPC to design electrohydrodynamic (EHD) wind energy technology (no moving parts); and Intelligent Light, a software company that used its CFD software to help design a game-changing bicycle racing wheel for manufacturer Zipp Speed Weaponry. In all cases, these fit into the high-innovation-need category, where the engineering, by necessity, required a lot of design iterations.

Intel’s Bill Feiereisen got the last word at the conference with his HPC in Manufacturing presentation on Wednesday afternoon. He brought up the idea of creating a pilot project that offers a template for entry-level users interested in making the jump to HPC. He also saw outreach and education as ways of getting the HPC message out and creating a critical mass of qualified practitioners.

Ultimately though, Feiereisen believes that high performance computing has to become accessible enough to be a “pull” rather than a “push” technology. Obviously, there’s no magic bullet for that, but at least there seems to be pretty solid consensus in the community now that they need to find some new ways to connect the technology dots.
