HPC: Still Looking for Love from Manufacturers

By Michael Feldman

March 28, 2012

One of the prominent themes of this week’s High Performance Computing and Communications Council (HPCC) Conference revolved around the question of why many users with a need for HPC are still resistant to adopting the technology. John West, the Director of the DoD’s High Performance Computing Modernization Program and the organizer of this year’s HPCC program, talked at length about this phenomenon in his conference kickoff presentation on Monday morning, titled “What’s Missing From HPC?”

There are plenty of drivers for bringing more users into the HPC fold, from the practical motivations of hardware and software vendors, who would like to move more product, to the more altruistic interests of the HPC’ers, who want to expand the community, and of the government, which sees the technology as a way to improve industrial competitiveness and create jobs.

The problem has been dubbed the “missing middle,” referring to the absence of HPC users between the topmost supercomputing practitioners at the national labs and those doing technical computing via MATLAB and CAE/CAD tools on personal computers and workstations. Many of these missing users are in the manufacturing sector, but they also inhabit more established HPC enclaves such as defense, life sciences and finance.

All things being equal, one would expect a continuum of HPC practitioners from the bottom to the top, with a pyramidal distribution that reflects application scale and complexity. But that’s not the case. While there are millions of people doing technical computing on the desktop and perhaps tens of thousands of supercomputing users at the top, the middle ground is, population-wise, much closer to the supercomputing group than to the desktop masses.

For these types of users, system size is in the “closet cluster” realm, on up to maybe a few racks of servers. In fact, this represents the average size of HPC systems for people who are not doing “big science”-type supercomputing. In that sense, the middle is not so much missing as grossly underpopulated.

According to West, most people using supercomputing today came to the technology because they didn’t have a choice. Astrophysicists couldn’t create two galaxies in a lab and watch them collide; they had to simulate the whole thing digitally. And because supercomputing practitioners are more or less a captive audience, the tools available to them are, in many cases, not all that great. They often rely on specialized compilers and development environments, legacy programming languages, command line interfaces, and obscure Linux commands. Meanwhile, the larger computing community has moved on to pretty GUIs and a rich ecosystem of more intuitive tools.

That by itself has made the jump from desktop computing to clusters a painful one. But as West mentioned later, there are a number of new interfaces being developed (usually specialized for individual applications or application domains) that are much more user friendly.
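To make the contrast concrete, below is a minimal sketch of the kind of thin wrapper those friendlier interfaces put between the user and a batch scheduler. It assumes a Slurm-managed cluster (sbatch is Slurm’s standard submission command); the solver binary, script contents and file names are invented for illustration, not taken from any particular product.

```python
# Hypothetical sketch: a thin Python wrapper of the sort that newer,
# friendlier HPC front ends put between the user and the scheduler.
# Assumes a Slurm-managed cluster; the solver binary and file names
# are invented for illustration.
import subprocess
import tempfile

BATCH_TEMPLATE = """#!/bin/bash
#SBATCH --job-name={name}
#SBATCH --nodes={nodes}
#SBATCH --time={hours}:00:00
srun ./solver --input {input_file}
"""

def submit_job(name: str, input_file: str, nodes: int = 4, hours: int = 8) -> str:
    """Write a batch script and hand it to Slurm, returning sbatch's output."""
    script = BATCH_TEMPLATE.format(name=name, nodes=nodes,
                                   hours=hours, input_file=input_file)
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name
    # sbatch prints something like "Submitted batch job 12345"
    result = subprocess.run(["sbatch", path],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit_job("galaxy-collision", "model.dat"))
```

Hiding the #SBATCH directives and command-line incantations behind a couple of function parameters is, in miniature, what the domain-specific front ends West mentioned do at larger scale.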

Another barrier to moving up the computing food chain is expensive hardware and software. “We’re mostly over this one,” West noted. “It’s not so expensive anymore, although if you’re talking about small manufacturers or small businesses, $50,000 is still real money.”

Then there’s the management of the cluster. If you don’t have an IT admin in your organization, or you have one who is used to managing only Windows PCs, the decision to add an HPC system is a lot more difficult. The choices (ignoring the cloud option) are to either hire a cluster administrator or convince IT to come up to speed on the technology.

Compounding that problem is the lack of a complete tool chain: the various codes, libraries and development tools needed to create the models and other user applications. Since these are often missing even at the high end of HPC, their absence for entry-level users should come as no surprise. The solution here, said West, is non-trivial and comes down to filling in those software gaps on a case-by-case basis.

One barrier that is not discussed as much is the lack of expertise and social support for HPC systems. In a workplace with no previous experience with the technology, the initial user is often the loneliest guy or gal in the building, with no one to ask when something goes wrong. “This is a skills problem, at its heart,” West said, adding that what is needed is a lot more people in industry who are at least computationally literate, plus a smaller number of computational professionals.

Related to this cultural and technical unfamiliarity with high performance computing is the fact that most non-HPC users already have something that works today. It might not be the fastest or slickest solution, but it serves its purpose. A typical desktop workflow might mean starting up a job on a PC before going home for the evening, and then getting the results back the following morning. If that doesn’t sound like an optimal workflow, at least it’s a comfortable one.

The opportunity for HPC arises when the pace of desktop computation isn’t fast enough, either because it’s limiting product innovation, it’s causing deadlines to be missed, or both. It’s been estimated that maybe half the 280,000 or so US manufacturers fall into that category. And given that only 4 to 8 percent of those manufacturers currently employ HPC, the opportunity does indeed appear to loom large.
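For concreteness, here is the back-of-the-envelope arithmetic behind that opportunity, using only the figures quoted above. This is a rough sketch: the article is ambiguous about whether the 4 to 8 percent applies to all US manufacturers or only to the compute-constrained half, so the base used here is an assumption.

```python
# Back-of-the-envelope sizing from the article's own figures.
# Assumption (ours): the 4-8 percent adoption rate is read against
# all US manufacturers; the article is ambiguous about the base.
us_manufacturers = 280_000
could_benefit = us_manufacturers // 2        # "maybe half" are compute-constrained

adopters_low = int(us_manufacturers * 0.04)  # 4 percent already on HPC
adopters_high = int(us_manufacturers * 0.08) # 8 percent already on HPC

# Even if every current adopter sits inside the could-benefit half,
# the untapped pool remains large.
untapped_floor = could_benefit - adopters_high

print(f"could benefit from HPC : ~{could_benefit:,}")
print(f"already using HPC      : ~{adopters_low:,} to ~{adopters_high:,}")
print(f"untapped, at minimum   : ~{untapped_floor:,}")
```

Under those assumptions the untapped pool works out to well over 100,000 companies, which is what makes the opportunity look so large.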

Of course, the underlying assumption here is that Moore’s Law is not sufficient for technical computing at any level. In other words, desktop systems that are regularly replaced with ones based on faster chips would not be powerful enough to keep up with an escalating demand for better application fidelity or more complex computations. While it’s true that desktop machines of today have as much computational power as the top supercomputers of 15 years ago, that’s still too slow for traditional supercomputing applications. To escape the more limited progression of Moore’s Law, HPC has turned to multiplying those processors across ever-larger clusters. But is Moore’s Law too slow for a typical CAE/CAD user?
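That 15-year comparison is easy to sanity-check. Assuming performance doubles every 18 to 24 months, the common reading of Moore’s Law, a single machine gains roughly a factor of 180 to 1,000 over 15 years. The snippet below is an illustration of that arithmetic, not a measurement:

```python
# Sanity check on "today's desktops match the top supercomputers of
# 15 years ago", assuming performance doubles every 18 to 24 months
# (the common reading of Moore's Law; a rough sketch, not a measurement).
years = 15
for months_per_doubling in (18, 24):
    doublings = years * 12 / months_per_doubling
    speedup = 2 ** doublings
    print(f"{months_per_doubling}-month doubling: ~{speedup:,.0f}x in {years} years")
```

A gain of that size is consistent with the desktop-versus-old-supercomputer comparison; whether it also keeps pace with a CAE/CAD user’s growing appetite for fidelity is exactly the open question.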

Since the cluster is the lens through which HPC practitioners view computing problems, it’s no surprise they believe the technology is appropriate for most, if not all, technical computing problems. In his conference presentation, West acknowledged that mindset, pointing out that people in this community tend to view HPC as an “unalloyed good” that can be applied to good effect nearly everywhere. “I think that’s not always helpful,” admitted West.

Intersect360 Research CEO Addison Snell, who has been following the HPC-manufacturing gap for the past couple of years, remarked that not every company is going to need the technology. According to him, the easiest converts will be those manufacturers who need to create innovative products, rather than just standard widgets that fit into a supply chain.

At the conference this week, there were three examples of companies that made a successful leap to HPC: Simpson Strong-Tie, which employs high fidelity FEA models for its structural engineering designs; Accio Energy, a wind energy start-up that is using HPC to design electrohydrodynamic (EHD) wind energy technology (no moving parts); and Intelligent Light, a software company that used its CFD software to help design a game-changing bicycle racing wheel for manufacturer Zipp Speed Weaponry. All three fit into the high-innovation-need category, where the engineering, by necessity, required a lot of design iterations.

Intel’s Bill Feiereisen got the last word at the conference with his “HPC in Manufacturing” presentation on Wednesday afternoon. He brought up the idea of creating a pilot project that offers a template for entry-level users interested in making the jump to HPC. He also saw outreach and education as ways of getting the HPC message out and creating a critical mass of qualified practitioners.

Ultimately though, Feiereisen believes that high performance computing has to become accessible enough to be a “pull” rather than a “push” technology. There is no magic bullet for that, but there now seems to be solid consensus in the community that it needs to find some new ways to connect the technology dots.
