DOE Sets Sights on Accelerating AI (and other) Technology Transfer

By John Russell

October 3, 2019

For the past two days, DOE leaders along with roughly 350 members of academia and industry gathered in Chicago to discuss AI development and the ways in which industry and DOE could collaborate to foster AI commercialization. The occasion was DOE’s fourth InnovationXLab Summit; the earlier three covered advanced manufacturing, grid modernization, and energy storage. Two more are planned: bio-manufacturing in January and quantum science in late spring.

On the agenda was AI technology and policy. While discussion rarely got into technology weeds, it was nonetheless a fascinating glimpse into AI prospects and challenges as seen by this experienced group and meant to stimulate a broader conversation between industry and DOE. The current Administration clearly shares the view that AI could well become transformative, a point made by Secretary of Energy Rick Perry in his opening remarks and further demonstrated last month by DOE’s formal establishment of an Artificial Intelligence and Technology Office (AITO).

The latest InnovationXLab also seems indicative of the Administration’s desire to drive greater technology transfer from the national labs. (Did you know DOE has its first chief commercialization officer, Conner Prochaska, appointed in 2018 as CCO and director of the DOE Office of Technology Transitions?) Back in July DOE launched its Lab Partnering Service (LPS) – a web portal intended to showcase DOE capabilities and facilitate match-making between potential users and DOE resources.

Broadly the LPS has three parts:

  • Connect with Experts. Unprecedented access to top national lab researchers will allow investors and innovators to connect with relevant subject matter experts, and receive unbiased and non-competitive technical assessments.
  • Technical/Marketing Summaries. Direct access to pre-validated, ready to license, and commercialize technologies.
  • Visual Patent Search. Dynamic online search and visualization database tool for patents associated with DOE laboratories.

Robert Bectel, senior program analyst for DOE’s Office of Technology Transitions, gave a brief talk on LPS capabilities.

“DOE has a lot of places for you to engage,” noted Bectel. “You can gain entry into the National Lab infrastructure to use these facilities. You typically do not rent the whole building, you rent a part of the building, [and] when you’re in that building, you’re not necessarily the expert using the machinery; you’ve designed the experiment, [and] the experts come from the labs. So you are in essence renting a team [or] a tool inside of a facility.

“We have 1,219 technology summaries online. Those are technologies that the national labs have provided to us for purposes of saying these are ready, these are technologies we think are ready to go to market. And here’s how you can connect with us to use them and pull them out quickly. The last thing, of course, is patents. Well, we have about 37,977 patents online [from] the last 20 years,” Bectel said.

Much was rightly made of DOE’s formidable HPC (four of the top ten fastest computers worldwide) and experimental resources as well as its patent and expertise resources. Driving technology transfer has always been challenging and it seems clear that DOE is looking for ways to become better at it.

Before digging into some of the AI-specific content, it’s perhaps worth quoting Rick Stevens, associate director, Argonne National Laboratory, who played a role as host and panel participant. Panel moderator Dimitri Kusnezov, DOE’s Deputy Under Secretary for Artificial Intelligence and Technology, asked Stevens, “From the point of view of the labs and novel partnerships what is the message from national labs?”

Stevens said, “The first message is we’re kind of open for business. I mean, that’s the first thing. Second thing is that we’re hungry for interesting problems that are going to challenge people. One way you get National Lab scientists motivated isn’t by throwing money at them necessarily – that helps of course – but giving [problems] that they find so compelling that they’re going to stay up all night and work on it over and over again…I’m sure in this room there are people from industry that have some of those problems that keep them up at night, whether it’s cybersecurity or whether it’s grid stability, or whether it’s drug development.”

It will be interesting to track DOE’s tech transfer efforts going forward, particularly given potential game-changers such as AI (near-term) and quantum information science (long-term).

Source: IBM rendition of HPC and AI development timelines

Capturing the full scope of the conference is beyond this brief article. Conference organizers have posted links to portions that were livestreamed (mostly plenary sessions). Presented here are summaries of the first panel (AI Research and DOE National Labs, National AI Initiative, Opportunities for Industry Collaborations in AI), which examined top line opportunities and challenges for AI medicine, energy, crypto-security, and academia.

Panelists included:

  • John Baldoni, CTO of Integral Health, a start-up seeking to use AI in drug discovery and proof of pharmacology. He was also a longtime senior research executive at GlaxoSmithKline.
  • Theresa Christian, manager, corporate strategy, innovation and sustainability, for Exelon, a large power generation and transmission company serving the U.S.
  • Ray Rothrock, chairman and CEO, Red Seal, a cyber-security specialist, which “models entire hybrid data center of public cloud, private cloud, and physical networks.”
  • Stevens, associate lab director at ANL and leader of Argonne’s Exascale Computing Initiative.

To some extent their comments were familiar to AI watchers. The flood of data and the diversity of data types in virtually all domains are creating opportunities for AI methods to generate actionable insight. In biomedicine, for example, a vast amount of data is locked up inside pharma companies. With easier access to this data, AI approaches can find associations and potential therapies that humans and traditional analytics struggle to uncover.

Baldoni helped spearhead an effort involving GlaxoSmithKline (GSK), Lawrence Livermore National Laboratory, the National Cancer Institute and others to build a central repository. “The consortium we put together is called ATOM – Accelerating Therapeutic Opportunities in Medicine,” he said. “They did an experiment using that dark data about a year after it was curated to design a molecule for a specific cancer target that’s of interest in the industry. They used simulations and optimized that molecule in a multi-parameter optimization in a compute run against toxicology, [bio] distribution, efficacy, the ability to synthesize the molecule and the ability of the molecule to distribute in tissue. It took 16 hours to come up with 200 virtual molecules that weren’t in the training set.”

Researchers found a promising molecule, which is “actually in phase two clinical trials for that cancer target. On average, to get a molecule that would make it to phase two, the industry tests millions of compounds, synthesizes tens of thousands of compounds, and takes about five years. So, the computer run took 16 hours.” Unlocking the dark data in this instance was the big key.

Conversely, the flood of data sloshing about in the world also represents vulnerabilities. Rothrock knows this world well.

Theresa Christian (Exelon), Ray Rothrock (Red Seal), and Rick Stevens (ANL)

“Look, AI empowers the bad guys, but it really empowers the good guys. It’s about threat evaluation and it is about defense evaluation. [For example], the infamous video of President Obama giving a speech which he never gave. It’s simply terrifying that you can create, through data and compute, a deceptive speech. I think you can flip that same coin to the other side,” said Rothrock.

“I am a person. I have certain behaviors and habits and so forth. And if my actions are tested, when a decision is made as a leader, a political leader, a financial leader, if that decision could be taken through an AI filter to see how far off it would be from what I might be expected to decide, and if it’s way out there, then you raise a flag and ask, ‘So why is he making that decision? What’s driving that decision?’ Having enough data is again key, and, of course, also subject to misuse.”

One challenge Exelon deals with is managing a wide variety of data modalities in real-time.

“So one of our key responsibilities as [a utility] is to keep the lights on, right. If there’s a winter storm in the city where you live, and that might cause limbs to fall on wires, we need to send crews out to restore service by repairing those wires and get the power back on as quickly, efficiently and safely as possible. Doing that relies on a number of different data streams that we maintain in house for that purpose, including, you know, the type of data that represents the assets we have in the field,” said Christian.

“We need to know everything about the geospatial distribution of our system, but also about the operational crew requirements, and what we can say about the weather that’s coming so we can better understand what the impact is likely to be. So when we think about the opportunity space for AI in an operational challenge like that, one of the really interesting areas for us is to build analytics frameworks that allow for these multiple modalities of data to be combined together so that we can develop solutions that use all those streams in a coordinated way and solve the problem at hand,” she said.

All the panelists commented on workforce issues. There was general agreement that AI is developed most effectively in multi-discipline environments.

“The cyber industry is about a $126 billion [market]. There are 3,000 products out there. A typical large corporation, probably like Exelon, has 50 or 60 cyber products and only five or 10 people to operate them. Well, that number, it’s a crushing situation. And while you need engineers, for sure, you also need technicians. They don’t all need a four-year degree, they need a piece of it,” said Rothrock.

Christian said, “We expect that there’s going to be people who are getting trained in other places coming in with the type of expertise to run these systems, whether it’s in the technician role, or the more senior expert roles. But we also are being very intentional about bringing our internal workforce up to speed with the types of skills that they’re going to need in data-driven businesses. We’re doing a lot of internal workforce training through some of our analytics initiatives at Exelon, and we’re finding ways to be more collaborative between our in-house experts and the external community.”

Stay tuned.

Link to conference videos: https://livestream.com/argonnelive

Links to Previous InnovationXLab Summits:

Advanced Manufacturing (May 7-8, 2019)

Oak Ridge, Tennessee (hosted by Oak Ridge National Laboratory)

Grid Modernization (January 24-25, 2019)

Seattle, Washington (hosted by Pacific Northwest National Laboratory)

Energy Storage (September 18-19, 2018)

Menlo Park, California (hosted by SLAC National Accelerator Laboratory)

HPCwire