Last week (July 22-23), Argonne National Lab, future home to the Intel-Cray Aurora supercomputer, hosted the first in a series of four AI for Science town hall meetings being convened by Department of Energy laboratories. The meetings are aimed at soliciting and collecting “community input on the opportunities and challenges facing the scientific community in the era of convergence of high-performance computing and artificial intelligence (AI) technologies.”
In alignment with DOE missions and the U.S. national AI initiative, the DOE community and its collaborators are being engaged to broadly discuss the opportunities that can be realized by advancing and accelerating the development of AI capabilities for science and science use cases.
“We’re asking the fundamental question: what do we have to do in the AI space to make it relevant for science? The point of the town halls is to get people thinking about what opportunities there are in different scientific domains for breakthrough science that can be accomplished by leveraging AI and working AI into simulation, bringing AI into big data, bringing AI to the facility and so forth,” said Argonne’s Rick Stevens in an interview with HPCwire. Stevens is co-chairing the town hall program along with Berkeley Lab’s Kathy Yelick and Oak Ridge Lab’s Jeff Nichols.
Each of the four town halls (held at Argonne, Oak Ridge, Berkeley, and in Washington, DC) encompasses high-level talks, application tracks and cross-cutting breakout sessions. The two-day Argonne event drew about 350 people, DOE and university researchers primarily from the Midwest region, with roughly 150 attendees coming from other parts of the country (including broad lab participation).
The first day focused on application breakouts by science domain (e.g., chemistry, mathematics, materials, climate, biology, high energy physics, nuclear physics); day two shifted to cross-cutting topics spanning fundamental math, software, data, understandability, uncertainty quantification, facilities, integration of simulation and AI, and computer architecture directions, among others.
The town halls will result in an integrated report, to be published by the end of the year, that will inform strategic planning and help shape programs and budgets.
If the town hall format sounds familiar, you may recall that a series of exascale town halls was held in 2007, helping sow the seeds for the US Department of Energy’s Exascale Computing Initiative (ECI) and Exascale Computing Project (ECP). Together these activities, with a focus on codesign, application readiness and “capable exascale,” are preparing the U.S. to stand up multiple exascale-class systems in the 2021-2023 timeframe.
Learnings from the AI town halls could conceivably lead to a more targeted, and potentially funded, national program, much as the exascale town halls helped establish a robust national exascale effort.
“We’ve got this huge exascale program and we’re now asking the question, what’s the opportunity for AI in the science space, particularly in the context of DOE but also more broadly with NIH and other agencies,” said Stevens, Argonne’s associate laboratory director for computing, environment and life sciences.
Maintaining leadership in AI is the primary directive of the U.S. national AI initiative, launched by the White House in February. The announcement and the subsequent OMB budget priority letters sent to the agencies declared progress in AI the number-one priority government-wide.
That AI initiative also challenged agencies to come up with plans, to determine resource levels, and to make progress on managing their data. It laid out a very high-level blueprint for what the country needs to do to maintain progress in AI and to complement, in the academic and government sectors, what’s going on at the internet companies, Stevens told HPCwire.
“Clearly there’s huge progress in the internet space, but those Facebooks and Googles and Microsofts and Amazons and so on, those guys are not going to be the primary drivers for AI in areas like high-energy physics or nuclear energy or wind power or new materials for solar or for cancer research – it’s not their business focus,” Stevens maintained. “We recognize that the challenge is how to leverage the investments made by the private sector to build on those [advances] to add what’s missing for scientific applications — and there’s lots of things missing. And then figure out what the computing community has to do to position the infrastructure and our investments in software and algorithms and math and so on to bring the AI opportunity closer to where we currently are.”
The overarching agenda for the AI for Science town hall program includes a set of “charge questions” aimed at surfacing the most compelling problems where AI could have an impact and identifying the requirements at the research and facility level needed to realize these opportunities.
We posed one of these questions to Stevens: What are 3-5 open questions that need to be addressed to maximally contribute to AI impact in the science domains and AI impact in the enabling technologies?
His top three:
+ Uncertainty quantification, i.e. model confidence (a rough sketch of one common approach appears after this list) — “When you’re doing cat videos, no one cares what your confidence interval is, where your error bars are exactly, but in a scientific, a medical application, you need to know that the answer is likely to be correct.”
+ The direction of AI architectures – “Are the architectures that are being developed to accelerate general AI research – are they in fact even what we need for the types of data and the types of networks and systems we need to build for applying AI in science?”
+ Injecting AI with ground truth (see the physics-informed sketch after this list) – “Our first way of thinking about the world is in some sense, do we have a mechanistic model of it, a physical model to simulate? And most of the progress in AI involves non-physical modeling. If you think about natural language processing, there’s no physical model for that. If you think about computer vision, most of the kinds of things that people do with computer vision, there’s no physical model; there is no ground truth that you can generate from first principles. But in many scientific areas, we’ve had 400 years of progress, in physics and chemistry and biology and so forth, and we have a lot of physical understanding. How do we use that physical understanding combined with data to build AI models that actually internalize that physical understanding? In other words, having these models be able to make predictions in the world as opposed to in some abstract space.”
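On the first point, one common practical approach to model confidence is to train an ensemble and read predictive uncertainty off the disagreement among its members. The sketch below is purely illustrative (the toy data, the random forest, and the library choice are assumptions, not anything presented at the town hall); it uses per-tree spread as a rough stand-in for the error bars Stevens describes.

```python
# Minimal sketch: ensemble spread as a rough uncertainty estimate.
# Illustrative only; data and model are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy "scientific" data: a noisy physical response curve.
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Disagreement across the individual trees serves as a confidence signal:
# wide spread flags predictions that should not be trusted blindly.
X_new = np.array([[2.0], [9.5]])
per_tree = np.stack([tree.predict(X_new) for tree in model.estimators_])
mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)
for x, m, s in zip(X_new[:, 0], mean, std):
    print(f"x={x:.1f}: prediction {m:.3f} +/- {s:.3f}")
```

Deep ensembles, Bayesian neural networks, and conformal prediction are heavier-weight variants of the same idea, trading compute for better-calibrated intervals.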
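On the third point, one widely used way to make a model "internalize" physical understanding is a physics-informed loss: the network is penalized both for missing the data and for violating a known governing equation. The following sketch is a minimal, hypothetical example (the decay law du/dt = -u, the architecture, and the hyperparameters are all assumptions for illustration), not a method proposed at the meeting.

```python
# Minimal sketch of a physics-informed loss: fit sparse data while also
# satisfying a known governing equation (here du/dt = -u, simple decay).
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# A handful of noisy observations of u(t) = exp(-t).
t_data = torch.tensor([[0.0], [0.5], [2.0]])
u_data = torch.exp(-t_data) + 0.01 * torch.randn_like(t_data)

# Collocation points where we enforce the physics, not the data.
t_phys = torch.linspace(0, 3, 50).reshape(-1, 1).requires_grad_(True)

for step in range(2000):
    opt.zero_grad()
    data_loss = ((net(t_data) - u_data) ** 2).mean()

    u = net(t_phys)
    du_dt = torch.autograd.grad(u.sum(), t_phys, create_graph=True)[0]
    physics_loss = ((du_dt + u) ** 2).mean()  # residual of du/dt = -u

    loss = data_loss + physics_loss  # physics acts as a regularizer
    loss.backward()
    opt.step()

print("u(1.0) ~", net(torch.tensor([[1.0]])).item(), "| exact:", math.exp(-1.0))
```

The physics residual grounds the model in first principles, so it can make sensible predictions in regions where observations are sparse, rather than operating purely "in some abstract space."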
The AI for Science Town Hall series continues at Oak Ridge National Laboratory (Aug. 20-21, 2019), Lawrence Berkeley National Laboratory (Sept. 11-12, 2019) and Washington DC (Oct. 22-23, 2019).
Link for more info: https://web.cvent.com/event/b03cf98d-d350-4f66-805a-1a19f03bdcf8/summary