HPC4Mfg Advances State-of-the-Art for American Manufacturing

By Tiffany Trader

March 9, 2017

Last Friday (March 3, 2017), the High Performance Computing for Manufacturing (HPC4Mfg) program held an industry engagement day workshop in San Diego, bringing together members of the US manufacturing community, national laboratories and universities to discuss the role of high-performance computing as an innovation engine for American manufacturing.

Keynote speaker Thomas Lange, a 36-year veteran of Procter & Gamble (P&G), the manufacturing company well known in HPC circles for its Pringles success story, engaged the room with a dynamic recounting of the history of manufacturing in the United States. Lange, an industry consultant since leaving P&G in 2015, emphasized the importance of infrastructure and logistics to the rise of American manufacturing. Throughout the last two centuries, he noted, manufacturing success was tied first to waterways (P&G), then to railroads (Sears), then to the interstate highway network (Walmart), and, moving into the present day, to the Internet (Amazon).

Tom Lange

“Manufacturers have to innovate how we do our thing or we will diminish,” said Lange. “It’s that simple. It’s not just about regulations and cheap labor off-shore; it’s about innovating how we do what we do, not just what we make. And it turns out innovating manufacturing at scale is too expensive to just try it and see what happens. That is the issue; it’s too big; it’s too expensive to mess with.”

The HPC4Mfg program was launched by the Department of Energy in 2015 to directly facilitate this innovation by infusing advanced computing expertise and technology into the US manufacturing industry, where it “shortens development time, guides designs, optimizes processes, prequalifies parts, reduces testing, reduces energy intensity, minimizes greenhouse gas emissions, and ultimately improves economic competitiveness,” according to HPC4Mfg program management. Advancing innovative clean energy technologies and reducing energy and resource consumption are core elements of the program.

Lori Diachin, HPC4Mfg Director

“The HPC4Mfg program has really been designed for high-performance computing and [demonstrating] the benefits to industry,” said HPC4Mfg Director Lori Diachin. “You see a lot of ways that it’s impacting industry in the projects we have now, and these impacts range from accelerating innovation, facilitating new product design, and upscaling technologies that have been demonstrated in the laboratory or at a small scale.”

HPC4Mfg began with five seedling projects and has since implemented three solicitation rounds. (Awardees for the third round are due to be announced very shortly.) It is now executing an $8.5-9 million portfolio at Lawrence Livermore, Lawrence Berkeley, and Oak Ridge National Laboratories (the managing partner laboratories for the program). The program is in the process of expanding across the DOE national lab space to include access to computers and expertise at other participating laboratories.

Currently, there are 27 demonstration projects (either in progress, getting started, or going through the CRADA process) and one larger capability project with Purdue Calumet and US Steel (to develop the “Virtual Blast Furnace”). The projects get access to the top supercomputers in the country: Titan at Oak Ridge, Cori at Berkeley, Vulcan at Livermore, Peregrine at NREL, and soon Mira at Argonne National Lab.

HPC4Mfg is sponsored by the DOE’s Advanced Manufacturing Office (AMO), which is part of the Office of Energy Efficiency and Renewable Energy. The AMO’s mission is to “partner with industry, small business, universities, and other stakeholders to identify and invest in emerging technologies with the potential to create high-quality domestic manufacturing jobs and enhance the global competitiveness of the United States.”

HPC4Mfg proposal submissions by industrial sector (Source: HPC4Mfg)

High-impact manufacturing areas, such as aerospace, automotive, machinery, chemical processing, and steel, are all represented in the participant pool.

“We aim to lower the barriers, lower the amount of risk that industrial companies have in experimenting with high performance computing in the context of their applications,” said Diachin of the program’s vision and goals. “From our perspective, the status of the industry is that some large companies have a lot of access to HPC. They’re very sophisticated in how they use it. On the flip side, very few small-to-medium-sized companies really have the in-house expertise or the access to compute resources that they need to even try out high performance computing in the context of their problems.

“On the DOE side, we do have a lot of expertise and we have very large-scale computers and so we’re able to bring to bear some of those technologies in a large array of different problems, but I think it’s a challenge – and I’ve heard this many times – for industry to understand how do they get access to the expertise that’s in the DOE labs. What is that expertise? Where does it live? They can’t really track everything that’s going on in all the national labs that the DOE has. And so this program is really designed to help reduce those barriers and create that marriage between industry-interesting challenges and problems and HPC resources at the laboratory.”

In terms of disciplines, computational fluid dynamics is a very widely needed expertise, as are materials modeling and thermomechanical modeling, though a wide variety of disciplines come into play, according to Diachin.

From Concept to Project: Airplanes, Lightbulbs, and Paper Towels

After submitting a concept paper, followed by a full proposal, successful projects receive about $300,000 from the AMO to fund the laboratory's participation in the project. The industrial partners are required to provide at least a 20 percent match to the AMO funding, usually in the form of in-kind time and effort, though they can also make a cash contribution.

Diachin emphasized that concept papers need not identify a particular lab or PI as collaborator, explaining, “You just need to tell us what your problem is and describe it in a way that we understand what simulation capabilities are needed and what’s the impact that you envision being able to achieve if you’re successful in this demonstration project. The technical merit review team will evaluate each concept paper for relevance as a high performance computing challenge, appropriateness for partnership with the national laboratories, and its ability to have national-scale impact and be successful. And if you haven’t identified a principal investigator at the national lab, we’ll identify the right place and team from the DOE lab complex to get this work done; this matching process is really a unique feature of the program.”

For a given round, the program typically receives about 40 concept papers, from which the program office selects about 20 to go forward to the full proposal stage. From those, around 10 are selected to be fully funded. The proposals are evaluated on how well they advance the state of the art for the manufacturing sector, the technical feasibility of the project, the impact on energy savings and/or clean energy production, relevance to HPC, and the strength and balance of the team.

“We are really looking for a strong partnership between the DOE lab and the company,” Diachin told HPCwire. “We’re looking for evidence that there were in-depth discussions as part of the proposal writing process and that there’s a good match in terms of the team.”

Building community and workforce is another important goal, and the AMO funds about 10 student internships each year to work on HPC4Mfg projects.

In her talk, Diachin highlighted several projects. The LIFT consortium, in collaboration with the University of Michigan and Livermore, is working to predict the strength of lightweight aluminum-lithium alloys produced under different process conditions. Implemented in aircraft designs, the new alloys could save millions of dollars in fuel costs.

SORAA/LLNL: GaN crystal growth

The SORAA/Livermore team is working to develop more efficient LED lightbulbs by modeling ammonothermal crystal growth of gallium nitride in order to scale up the process. The goal is to reduce production costs of LED lighting by 20 percent. Project partners say the new high-fidelity model will save years of trial-and-error experimentation typically needed to facilitate large-scale commercial production.

Energy savings in papermaking is the focus of the Agenda2020 Technology Alliance (a paper industry consortium) in collaboration with Livermore and Berkeley. The goal of this project is to use multi-physics models to reduce paper rewetting in the pressing process. The simulations will be used to optimize drying, reducing energy consumption by up to 20 percent (saving 80 trillion BTUs and $250 million each year).

In another paper-related project, P&G and their lab partner Livermore are using HPC to evaluate different microfiber configurations “to optimize the drying time while maintaining user experience.” The project resulted in the development of a new mesh tool, called pFiber, that reduces the product design cycle by a factor of two for smaller numbers of fibers and processing cores, and by a factor of eight for higher fiber counts using a larger number of cores.

This P&G project also illustrates the return on investment for the laboratories. The example represents the largest non-benchmark run done with the Paradyn code at Livermore. “These are very challenging problems that the industry is putting forward that are stretching the capabilities and making our capabilities at the national labs more robust,” said Diachin.

One area that is receiving a lot of attention is additive manufacturing, which is broadly used among multiple industry sectors and thus fits with the role of HPC4Mfg to foster high-impact innovation. “It’s a very hot topic for modeling and simulation, both to better understand the processes and the properties of the resultant parts,” said Diachin.

A collaboration involving United Technologies Research Center (UTRC), Livermore and Oak Ridge is one of the projects studying this industrial process. Their focus is on dendrite growth in additive manufacturing parts. UTRC is one of those companies that has a lot of sophisticated modeling and simulation experience, Diachin explained. “They came to the table with some models that they had in hand that they could run in two dimensions, but they weren’t able to take into three dimensions, so the collaboration is taking the models that they have and looking at implementing them directly in a code at Livermore called AMP and running that to much larger scale. At the same time, at Oak Ridge, there are alternate models that can be used to model these processes, so they are developing these alternate models and then they will compare and contrast these different models to understand the process better. So it’s a very interesting approach.”
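A rough capacity estimate shows why that jump from two to three dimensions typically forces a move to laboratory-scale machines. The sketch below (in Python, using a hypothetical grid resolution and per-cell field count; these are illustrative assumptions, not figures from the UTRC project) compares the memory footprint of the same model on a 2D and a 3D grid:

```python
# Back-of-the-envelope: why a model that is tractable in 2D can demand HPC in 3D.
# At the same per-axis resolution, cell counts grow from N^2 to N^3.
# All numbers below are illustrative assumptions, not project figures.

cells_per_axis = 2_000           # hypothetical per-axis resolution
bytes_per_cell = 4 * 8           # assume four double-precision fields per cell

cells_2d = cells_per_axis ** 2   # 4 million cells
cells_3d = cells_per_axis ** 3   # 8 billion cells -- 2,000x more

print(f"2D memory: {cells_2d * bytes_per_cell / 1e9:.2f} GB")  # ~0.13 GB
print(f"3D memory: {cells_3d * bytes_per_cell / 1e9:.0f} GB")  # ~256 GB
```

And because explicit time stepping does work proportional to the cell count at every step, compute cost grows by at least the same factor, which is where leadership-class systems come in.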

Once the projects create these large-scale models in partnership with the labs, there can be a need to then down-scale the applications to employ them in industrial settings. This is where reduced order modeling comes in. “This can be a very nice use of the resources and expertise at the labs,” Diachin told HPCwire. “The way reduced order models often work is you run very large-scale, fine-resolution, detailed simulations of a particular phenomenon and from that you can extract basis vectors from a number of different parameter runs. You can then use those basis vectors to create a much smaller representation of the problem – often two to three orders of magnitude smaller. Problems that required high-performance computing can then be run on a small cluster or even a desktop and you can do more real-time analysis within the context of the parameter space you studied with the large-scale run. That’s a very powerful tool for process optimization or the process decisions you have to make in an operating environment.”
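As a concrete illustration of the workflow Diachin describes, here is a minimal sketch of proper orthogonal decomposition (POD), one standard way to extract basis vectors from a set of large-scale runs. It is hypothetical Python/NumPy with made-up array sizes and an assumed 99 percent energy cutoff, not code from any HPC4Mfg project:

```python
# Minimal POD (proper orthogonal decomposition) sketch for reduced order modeling.
# Sizes and data are stand-ins for illustration only.
import numpy as np

n_dof = 100_000        # fine-grid degrees of freedom (hypothetical)
n_snapshots = 50       # detailed large-scale runs across the parameter space

# Each column is one large-scale simulation result ("snapshot").
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n_dof, n_snapshots))  # stand-in data

# Extract basis vectors: the left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Keep only the modes that capture (say) 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                     # n_dof x r, with r << n_dof

# A state is now represented by r coefficients instead of n_dof grid values.
coeffs = basis.T @ snapshots[:, 0]   # project a snapshot onto the basis
reconstruction = basis @ coeffs      # lift back to the fine grid
```

Replacing n_dof grid values with r coefficients is what shrinks the problem by the two to three orders of magnitude Diachin cites, letting it run on a small cluster or desktop within the explored parameter space.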

HPC4Mfg focuses on manufacturing right now, but the concept is designed to be scalable. “We get a lot of concept papers that are very appropriate for other offices potentially within the Department of Energy and we have been informally socializing them. With the next solicitation we’re going to make that more formal. Jeff Roberts from Livermore National Lab has been working with Mark Johnson at the AMO and others to really expand the program into a lot of different areas,” said Diachin.

The program runs two solicitations per year, in the fall and in the spring. The next funding round will be announced in mid-to-late March, with concept papers due the following month. After the announcement, the HPC4Mfg program management team will conduct webinars to explain the goals of the program and the submission process, and to answer any questions.

Announced Projects:

Spring 2016 Solicitation Selectees

Fall 2015 Solicitation Selectees

Seedlings
