A Platform for Smart Manufacturing

By Tiffany Trader

September 25, 2014

21st century market dynamics put a great deal of pressure on manufacturers to operate differently. Driving forces are many but include the need to satisfy customer demands quickly and to deal with energy constraints and environmental concerns. On the flip side, the growth of network technology in tandem with service-oriented architectures can have a transformative effect by providing real-time insight into manufacturing processes. This is the basis of smart manufacturing, which applies networked information-based technologies throughout the manufacturing and supply chain enterprise to achieve increased efficiency, productivity, competitive advantage, and ultimately better ROI.

When it comes to the role of HPC in manufacturing, much of the focus has been given to virtual design and prototyping, using computer modeling and simulation for product design and improvement. Smart manufacturing cuts a wider path, leveraging data and information to enable proactive and intelligent manufacturing decisions.

To find out more about this emerging paradigm, HPCwire spoke with Jim Davis, CIO of UCLA and cofounder of the Smart Manufacturing Leadership Coalition (SMLC), an organization that is driving standards in processes and developing the nation’s first open smart manufacturing platform.

“We still interface with the design side,” Davis said, “but our emphasis is on the real-time nature of manufacturing. We’re interested in real-time data, the real-time use of computation/analytics, the orchestration of the software into actionable forms that are interfacing with automation and control, or with real-time decision-making or with real-time events at the supply-chain level. So the notion of time, real-time, actionable tasks and decision-making are what distinguishes smart manufacturing from the design chain.”

Davis goes on to explain that when his group began looking at a number of different industry segments, a common theme emerged: companies needed to access computation and analytics in a much better way. They needed to be able to scale IT infrastructure and they needed connectors to interface with automation and control or factory platforms more effectively, but at the same time they needed to be able to merge data and orchestrate it for broader kinds of metrics that extend across offices, supply chains, or operations.

“This took us down the path of platform technology, and led to the development of our Smart Manufacturing Platform,” said Davis. “We’ve been looking at a whole set of services that allows the computation analytics and so forth to be accessed at scale for real-time actionable use.”

“At the platform level, there is quite a bit of overlap with the design, and in fact the design models make really good sense of the manufacturing space and vice versa, but design is distinctly different from manufacturing, the actual delivery.”

Last year, SMLC won a Department of Energy contract to develop the nation’s first open smart manufacturing technology platform for collaborative industrial networked information applications. The first two test beds funded by the $10 million award are at a General Dynamics Army Munitions plant, to optimize heat treating furnaces, and at a Praxair hydrogen processing plant, to optimize steam methane reforming furnaces. The test bed project technologies stand to reduce annual CO2 emissions by 69 million tons and waste heat by 1.3 quads, or approximately 1.3 percent of total US energy use.

In the case of the steam methane reforming furnace, Davis explains that managing the furnace and its energy use in a better way is a good fit for a high-fidelity computational fluid dynamics model. Understanding flow and heat distribution characteristics within the furnace has been difficult because the harshness of the furnace environment tends to preclude sensor placement. Now project participants are working to put infrared cameras around the furnace that allow the furnace internals to be measured and visualized in real time. They then bring that data together with other measurements in a computational fluid dynamics model to predict the overall heat distribution, optimize it, and update the control model that is running the plant.

“We use a computational-fluid dynamics model to predict and update parameters in a control model and that allows us to run this in real time,” Davis explained. “There’s a substantial energy savings by using the high-fidelity model.”
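In rough pseudocode, the cycle Davis describes could look something like the sketch below: fuse the infrared camera readings with other plant measurements, use the CFD model (or a fast surrogate of it) to predict the heat distribution, and push updated parameters to the control model. This is an illustration only, and every function name and value is a hypothetical placeholder rather than anything from the actual SMLC or Praxair code.

```python
# Illustrative sketch of the measure -> predict -> update cycle described above.
# All function names and data values are hypothetical placeholders.
import time

def read_ir_cameras():
    """Reduce exterior infrared images to furnace wall-temperature estimates."""
    return {"wall_temps_C": [880.0, 905.0, 890.0]}    # placeholder values

def read_plant_sensors():
    """Gather routine furnace measurements (fuel rate, flows, stack temperature)."""
    return {"fuel_rate": 1.0, "stack_temp_C": 320.0}  # placeholder values

def predict_heat_distribution(wall_temps, sensors):
    """Stand-in for the high-fidelity CFD model (or a reduced-order surrogate)
    that estimates the heat distribution inside the furnace."""
    return {"peak_zone_temp_C": max(wall_temps["wall_temps_C"]) + 25.0}

def update_control_model(heat_distribution):
    """Map the predicted distribution onto updated parameters for the control
    model that is actually running the plant."""
    print("new control parameters:", heat_distribution)

def run_loop(period_s=300):
    """Repeat the cycle on a cadence the process can act on."""
    while True:
        wall_temps = read_ir_cameras()
        sensors = read_plant_sensors()
        heat = predict_heat_distribution(wall_temps, sensors)
        update_control_model(heat)
        time.sleep(period_s)
```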

The team is doing the model development on a 12,000-core UCLA cluster. While the application isn’t optimized to use all the cores, there are sufficient computational resources that compute times went from a matter of days and weeks down to just hours, a tractable range from a process standpoint.

SMLC is also working with another company that fabricates metal parts, at a plant whose process involves heating and forging steps, heat treatment steps, and then shaping and machining steps. By using modeling to achieve the right metallurgical properties, the company saves on machining maintenance, machine time, and machine utilities. Doing it this way also saves energy in the heat treatment process.

“This is an example of a discrete process where we’re actually able to save electricity and gas/fuel-based energy in substantial ways and at the same time improve the production and quality of the product,” said Davis.

SMLC analyzed across industries, across manufacturing structures, and across problems, and put together a set of requirements which were used to spec the Smart Manufacturing Platform. The platform is based on the services infrastructure developed by Nimbis Services, the originator of the cloud-based technical computing marketplace. SMLC added a unique workflow-as-a-service layer that allows companies to select and put together different components ranging from ‘how do I collect data?’ to ‘how do I analyze it?’ and finally ‘how do I interface it back with the plant?’ Put another way, the workflow-as-a-service layer arranges a series of pieces of code into an organized format that can be put into actionable use.
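As a rough illustration of what arranging pieces of code into an organized, actionable format can mean, the sketch below registers three placeholder steps, collect data, analyze it, and interface back with the plant, and runs them as an ordered workflow. It assumes nothing about the SMLC platform's real API; the registry, decorator, and step contents are invented for illustration.

```python
# Illustrative sketch only: a minimal workflow-as-a-service style definition.
# The registry, step names, and data are hypothetical, not the SMLC platform's API.
from typing import Callable, Dict, List

STEP_REGISTRY: Dict[str, Callable[[dict], dict]] = {}

def step(name: str):
    """Register a reusable piece of code under a name a workflow can refer to."""
    def register(fn):
        STEP_REGISTRY[name] = fn
        return fn
    return register

@step("collect_data")
def collect_data(ctx: dict) -> dict:
    ctx["raw"] = {"furnace_temp_C": 870}                    # placeholder data source
    return ctx

@step("analyze")
def analyze(ctx: dict) -> dict:
    ctx["setpoint_C"] = ctx["raw"]["furnace_temp_C"] + 5    # placeholder analytic
    return ctx

@step("interface_with_plant")
def interface_with_plant(ctx: dict) -> dict:
    print(f"sending setpoint {ctx['setpoint_C']} C to the control system")  # placeholder
    return ctx

# The "workflow" is just an ordered selection of registered components.
WORKFLOW: List[str] = ["collect_data", "analyze", "interface_with_plant"]

def run(workflow: List[str]) -> dict:
    ctx: dict = {}
    for name in workflow:
        ctx = STEP_REGISTRY[name](ctx)
    return ctx

if __name__ == "__main__":
    run(WORKFLOW)
```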

[Image: SM Platform Apps Store]

SMLC and its partners are now in the process of building out the platform against specific test beds in automotive, food, ammunition, gas, refining, chemicals, and pharmaceuticals. The prototype contains a vertical stack of all the services – the computational and storage layers, the cloud management layers, and the workflow-as-a-service layer – and it has the ability to bring those environments together. The next step is building out capabilities and robustness within each of the layers, so for the cloud management layer, for example, they will be implementing OpenStack.

“We’re seeing a set of tools that are relatively invariant across companies,” said Davis. “These tools have to do with access to computational resources, the ability to spin up and down instances of computation. The platform is basically architecting out those invariant elements and leaving a layer called the smart manufacturing marketplace, a layer in which the companies can come in and select different components that they need that are specific to their own uses and missions.”
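Since the article names OpenStack for the cloud management layer, here is a hedged sketch, using the openstacksdk Python library, of what spinning compute instances up and down at that layer can look like. The cloud name, image, flavor, and network below are placeholders, not details of the SMLC deployment.

```python
# Hedged sketch of spinning a compute instance up and down via openstacksdk.
# 'smlc-cloud' and the image/flavor/network names are placeholders for illustration.
import openstack

def spin_up(conn, name="sm-analytics-node"):
    image = conn.image.find_image("centos-7")          # placeholder image name
    flavor = conn.compute.find_flavor("m1.xlarge")     # placeholder flavor name
    network = conn.network.find_network("private")     # placeholder network name
    server = conn.compute.create_server(
        name=name,
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    return conn.compute.wait_for_server(server)        # block until the server is ACTIVE

def spin_down(conn, server):
    conn.compute.delete_server(server)

if __name__ == "__main__":
    # 'smlc-cloud' would be an entry in the operator's clouds.yaml
    conn = openstack.connect(cloud="smlc-cloud")
    node = spin_up(conn)
    # ... run the analytics workload, then release the resources ...
    spin_down(conn, node)
```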
