Deep neural networks are being mustered by U.S. military researchers to marshal new technology forces on the Internet of Battlefield Things.
U.S. Army and industry researchers said this week they have developed a “confidence metric” for assessing the reliability of the deep neural networks used in AI and machine learning systems. The metric seeks to boost reliability by flagging predictions that stray beyond what a system’s training supports. The goal is to develop AI-based systems that are less prone to deception when presented with information beyond their training.
SRI International has been working since 2018 with the Army Research Laboratory as part of the service’s Internet of Battlefield Things Collaborative Research Alliance. The partners are investigating methods to “harden” machine learning algorithms, making them less susceptible to what researchers believe will be AI countermeasures.
Among the goals is creating “the next generation of algorithms that are robust and resilient,” said Army scientist Brian Jalaian. The service lab’s approach “can be added as an additional block to many of the Army’s modern algorithms using modern machine learning algorithms that are based on deep neural networks used for visual imagery,” Jalaian added.
The researchers expect to apply the new metric to Army command and control and decision support systems as well as precision fire weapons.
The Army-industry team has published research on their neural network metric. The approach, dubbed the “attribution-based confidence metric,” characterizes whether a deep neural network’s output can be trusted for a given input.
Deep neural networks “are known to be brittle on inputs outside the training distribution and are, hence, susceptible to adversarial attacks,” the researchers noted.
They added that the metric does not require access to training data, nor does it require training of a calibration model using a validation data set. “Hence, the new metric is usable even when only a trained model is available for inference,” they added.
The Army researchers are also working with the AI community to develop containerized software for gauging confidence in algorithms running in a range of applications.
Among the applications is the Internet of Battlefield Things that would use resilient neural network models to deploy networks of smart devices. Those battlefield devices would have to be hardened against cyberattacks and other exploits. The confidence metric is seen as a first step in building trust in the Army’s next generation of AI-based systems.
“We did not have an approach to detect the strongest state-of-the-art attacks such as [adversarial] patches that add noise to imagery, such that they lead to incorrect predictions,” Jalaian said.
The proposed generative model “adjusts aspects of the original input images in the underlying original deep neural network.” The original network’s “response to these generated inputs are then assessed to measure the conformance of the model,” he added.
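The mechanism the researchers describe — generating adjusted versions of an input and checking how consistently the original network responds to them — can be sketched in miniature. The code below is an illustrative assumption, not the team’s implementation: it stands in a toy linear softmax classifier for the trained network, uses a simple gradient-times-input attribution, and scores confidence as the fraction of attribution-guided perturbations that leave the predicted class unchanged. Note that, as the researchers state, nothing here touches training data; only the trained model’s predictions are queried.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: a fixed linear softmax classifier
# over 8 input features and 3 classes (purely illustrative).
W = rng.normal(size=(3, 8))

def predict_proba(x):
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def attribution(x):
    # Gradient-times-input ("saliency") attribution for the predicted
    # class; for a linear model the gradient of logit c w.r.t. x is W[c].
    c = int(np.argmax(predict_proba(x)))
    return W[c] * x

def abc_confidence(x, n_samples=200, noise=0.3):
    """Fraction of attribution-guided perturbations of x that keep the
    original predicted class -- a conformance score in [0, 1]."""
    base_class = int(np.argmax(predict_proba(x)))
    attr = np.abs(attribution(x))
    # Perturb low-attribution features more strongly: features the model
    # relied on least should not flip a trustworthy prediction.
    scale = noise * (1.0 - attr / (attr.max() + 1e-12))
    agree = 0
    for _ in range(n_samples):
        x_pert = x + rng.normal(size=x.shape) * scale
        if int(np.argmax(predict_proba(x_pert))) == base_class:
            agree += 1
    return agree / n_samples

x = rng.normal(size=8)
conf = abc_confidence(x)
```

A low score means the network’s prediction changes under small, attribution-guided adjustments to the input — the kind of brittleness the researchers associate with inputs outside the training distribution.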