Late last week the Department of Energy released its promised AI for Science report. The massive (224-page) effort is intended to identify AI opportunities and potentially lay the groundwork for an Exascale-like initiative to advance the use of AI in science. It encompasses sixteen domain areas, ranging from high energy physics and materials science to all aspects of computational technology.
HPCwire plans an in-depth article on the report in the near future. Presented here is just a glimpse of its scope and a link to the report itself. Written by six prominent DOE researchers – Rick Stevens and Valerie Taylor (Argonne National Laboratory); Jeff Nichols and Arthur Barney Maccabe (Oak Ridge National Laboratory); and Kathy Yelick and David Brown (Lawrence Berkeley National Laboratory) – the AI for Science report seeks to summarize and prioritize the core ideas discussed by more than 1,000 attendees at DOE’s series of town hall meetings held between July and October of last year.
As noted by the authors, “[P]articipants anticipate the use of AI methods to accelerate the design, discovery, and evaluation of new materials, and to advance the development of new hardware and software systems; to identify new science and theories within increasingly high-bandwidth instrument data streams; to improve experiments by inserting inference capabilities in control and analysis loops; and to enable the design, evaluation, autonomous operation, and optimization of complex systems from light sources to HPC data centers; and to advance the development of self-driving laboratories and scientific workflows.”
Domain areas tackled include:
- Chemistry, Materials, and Nanoscience
- Earth and Environmental Sciences
- Biology and Life Sciences
- High Energy Physics
- Nuclear Physics
- Fusion
- Engineering and Manufacturing
- Smart Energy Infrastructure
- AI for Computer Science
- AI Foundations and Open Problems
- Software Environments and Software Research
- Data Life Cycle and Infrastructure
- Hardware Architectures
- AI for Imaging
- AI at the Edge
- Facilities Integration and AI Ecosystem
It’s a big report (the full link appears at the end of this article). Each section includes: state of the art; major challenges; advances in the next decade; accelerating development; expected outcomes; and references.
All of the domain discussions are interesting. Consider this brief excerpt from the hardware section describing needs:
“At one extreme, systems with thousands of specialized architectures (e.g., NVIDIA Volta and AMD MI60 GPUs, FPGAs from Intel and Xilinx, Google TPUs, SambaNova, Groq, Cerebras) are required to train AI models from immense datasets. For example, Google’s TPU pod has 2048 TPUs and 32 terabytes of memory and is used for AI model training; its specialized tensor processors provide 100,000 tera-ops for AI training and inference. In addition, they are coupled directly to Google’s cloud, a massive data infrastructure (>100 petabytes). The progress of the Google TPU in its use for Alpha Go series of matches demonstrates that codesign—the refinement of hardware, software, and datasets for solving a specific goal—provides major benefits to performance, power, and quality.
“At the other end of the spectrum, edge devices must often be capable of low latency inference at very low power. Industry has invested heavily in a variety of edge computing devices for AI including tensor calculation accelerators (e.g., ARM Pelion, NVIDIA T4, Google’s Edge TPU, and Intel’s Movidius) and neuromorphic devices (e.g., IBM’s TrueNorth and Intel’s Loihi).
“Experts expect dramatic improvements in the compute capability and energy efficiency of these devices over the next decade as they are further refined. For example, NVIDIA recently released its Jetson AGX Xavier platform, which operates at less than 30W and is meant for deploying advanced AI and computer vision algorithms at the edge using many specialized devices such as hardware accelerators (i.e., DLAs) for fixed-function convolutional neural networks (CNNs) inference. Another example is Tesla’s FSD Chip, which can deliver 72 tera-ops (72×10¹² operations per second) at 72 watts and support capabilities that can respond in 10 milliseconds (driving speed response) with high reliability.
“In contrast, DOE’s applications can require responses 100,000x faster—100 nanoseconds for real-time experiment optimization in electron microscopy or APS experiments where the samples degrade rapidly under high-energy illumination (see Chapter 14, AI for Imaging).”
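As a quick sanity check on the figures quoted in the excerpt above (this sketch is ours, not the report's), the efficiency and latency-gap numbers work out as follows:

```python
# Illustrative arithmetic check of the numbers quoted in the report excerpt.

# Tesla FSD chip: 72 tera-ops at 72 watts.
fsd_ops_per_s = 72e12     # 72 tera-ops = 72 x 10^12 operations per second
fsd_watts = 72.0
tera_ops_per_watt = fsd_ops_per_s / fsd_watts / 1e12
print(f"FSD efficiency: {tera_ops_per_watt:.0f} tera-op/watt")

# Latency gap: a 10 ms driving-speed response vs. the 100 ns needed for
# real-time experiment optimization at DOE instruments.
edge_latency_s = 10e-3    # 10 milliseconds
doe_latency_s = 100e-9    # 100 nanoseconds
print(f"Required speedup: {round(edge_latency_s / doe_latency_s):,}x")
```

The ratio of 10 milliseconds to 100 nanoseconds is indeed the 100,000x the report cites.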
Stay tuned for HPCwire’s full report.
Link to AI in Science report: https://anl.app.box.com/s/bpp2xokglo8z8qiw7qzmgtsnmhree4p0