Evidence that machine learning, bolstered by heterogeneous computing, has become an important tool in science seems to gather daily. Recently, a group of scientists from Fermilab, MIT, CERN, the University of Washington, and elsewhere demonstrated a new ML technique for accelerating the identification of high-energy particle signatures using Microsoft’s Project Brainwave platform, which deploys FPGAs. The researchers demonstrated a 30x to 175x reduction in the time required compared with existing methods, in what they emphasize is a proof-of-concept effort.
The researchers have written a paper posted on arXiv (FPGA-accelerated machine learning inference as a service for particle physics computing), and there’s a brief account of their work posted on the MIT website. In brief, the team trained its new system to identify images of top quarks, the most massive type of elementary particle, some 180 times heavier than a proton.
Repeating a refrain well-known to computer scientists, the physics researchers note in their paper the importance of the emergence of heterogeneous computer architectures (CPUs plus accelerators) to speed a wide range of computations including AI methods. Finding code optimized for these new systems, however, can be problematic.
They write, “To capitalize on this new wave of heterogeneous computing and specialized hardware, particle physicists have two primary options:
- Adapt domain-specific algorithms to run on specialized accelerator hardware. This option takes advantage of specific human expert knowledge, but can be challenging to implement on new and potentially changing hardware platforms with different computing paradigms (such as CUDA or Verilog).
- Design ML algorithms to replace domain-specific algorithms. This option has the advantage of running natively on specialized hardware, but it can be a challenge to map specific physics problems onto ML solutions.”
In this instance, the researchers chose the second option, in which a known ML algorithm is adapted to solve the physics problem:
“We focus on the acceleration of the ResNet-50 convolutional neural network model and adapt it to physics applications. As an example, we interpret jets, collimated sprays of particles produced in LHC collisions, as 2D images that are classified by ResNet-50. We keep the same architecture but train new weights to distinguish top quark jets from light quark and gluon jets. Using a publicly available dataset, we compare our model against other state-of-the-art models in the literature and find similarly excellent performance. We also discuss the potential for Brainwave to be used in other particle physics applications. For example, neutrino event reconstruction deploys large convolutional neural networks in their experiments and large network inferences are a bottleneck in their current computing workflow. Coprocessor-accelerated machine learning inference could be deployed for such neutrino experiments today.”
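The jet-image idea in the quote can be sketched in a few lines: each particle in a jet carries coordinates (pseudorapidity η, azimuthal angle φ) and a transverse momentum pT, and the jet becomes an image by binning the particles into a fixed 2D grid whose pixel intensities are the summed pT. The sketch below is illustrative only; the grid size, coordinate ranges, and function names are assumptions, not values or code from the paper.

```python
# Illustrative sketch of turning a jet into a 2D image, as described in the
# quote: bin the jet's constituent particles (eta, phi, pT) into a pixel grid
# whose intensities are summed transverse momentum. The grid size and
# coordinate ranges here are arbitrary choices, not the paper's values.

def jet_to_image(particles, n_pixels=32, eta_range=(-1.0, 1.0), phi_range=(-1.0, 1.0)):
    """particles: iterable of (eta, phi, pt) measured relative to the jet axis."""
    image = [[0.0] * n_pixels for _ in range(n_pixels)]
    eta_lo, eta_hi = eta_range
    phi_lo, phi_hi = phi_range
    for eta, phi, pt in particles:
        if not (eta_lo <= eta < eta_hi and phi_lo <= phi < phi_hi):
            continue  # particle falls outside the image window
        i = int((eta - eta_lo) / (eta_hi - eta_lo) * n_pixels)
        j = int((phi - phi_lo) / (phi_hi - phi_lo) * n_pixels)
        image[i][j] += pt  # pixel intensity = summed pT in that cell
    return image

# Example: three particles; the first two land in the same pixel.
img = jet_to_image([(0.01, 0.01, 50.0), (0.015, 0.012, 30.0), (0.5, -0.3, 20.0)])
```

An image built this way can then be fed to a standard convolutional classifier such as ResNet-50, with new weights trained to separate top-quark jets from light-quark and gluon jets, as the authors describe.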
They also commented on the Microsoft Brainwave platform: “…FPGAs as a computing solution offers a combination of low power usage, parallelization, and programmable hardware. Another important aspect of FPGA inference for the particle physics community, compared to GPU acceleration, is that batching is not required for high performance; FPGA performance is not diminished for serial processing. The Brainwave system, in particular, has demonstrated the use of FPGAs in a cloud system to accelerate ML inference at large scale.”
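The batching point can be made concrete with a toy latency model (the numbers and function names below are invented for illustration; they are not measurements from the paper or from Brainwave): a throughput-oriented accelerator amortizes its kernel cost over a batch, so a single event's latency includes the wait for the batch to fill, while a batch-1 FPGA pipeline pays only its own compute time per event.

```python
# Toy latency model contrasting batched vs. batch-1 (serial) inference.
# All numbers are invented for illustration only.

def batched_latency_per_event(batch_size, arrival_gap_ms, batch_compute_ms):
    """Average latency seen by one event when requests must be batched:
    wait for the batch to fill, then compute the whole batch together."""
    # Average wait to fill the batch, with events arriving arrival_gap_ms apart
    avg_fill_wait = (batch_size - 1) / 2 * arrival_gap_ms
    return avg_fill_wait + batch_compute_ms

def serial_latency_per_event(compute_ms):
    """Batch-1 inference: each event pays only its own compute time."""
    return compute_ms

# Hypothetical numbers: events arrive every 5 ms; a batch of 32 takes 20 ms
# to compute; a single batch-1 inference takes 10 ms.
batched = batched_latency_per_event(batch_size=32, arrival_gap_ms=5.0, batch_compute_ms=20.0)
serial = serial_latency_per_event(compute_ms=10.0)
```

The model captures the authors' point qualitatively: when events arrive one at a time, the fill-the-batch wait dominates batched latency, whereas serial (batch-1) processing on an FPGA avoids it entirely.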
It’s an interesting paper, and one of a growing number of efforts by domain scientists to tackle the use of AI on heterogeneous architectures and to capture their lessons for others to build on.
Link to paper: https://arxiv.org/pdf/1904.08986.pdf
Artificial intelligence interfaced with the Large Hadron Collider can lead to higher precision in data analysis, which can improve measurements of fundamental physics properties and potentially lead to new discoveries. Image credit: Fermilab