As the volume of surveillance data, particularly in video, image, and satellite or GIS-driven formats, continues to grow, the government and military are looking for ways to boost their processing capabilities and push them into a real-time context.
When it comes to video, image, and signal processing for these users, the problem is partly one of data volume. Even back in 2009, the U.S. Air Force collected over 27 years' worth of video from Iraq and Afghanistan, a well that is useless without the light of both analytics and processing power. Add the need to hunt through similarly vast volumes against ever-changing current data, and the problem gets more complex still.
One option defense is looking to on this front is GPUs, which from the beginning have been a fit for video and image processing, as well as for certain machine learning and pattern-matching needs common in the geospatial intelligence community. For the most part, these workloads hit the sweet spot for GPU use: they are mostly straightforward linear algebra, essentially a great deal of matrix multiplication. For defense users, the GPUs are chewing on the image and video processing itself, but they are also speeding up the machine learning and neural network-like processes applied to that data.
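To make the "lots of matrix multiplication" point concrete, image filtering itself can be recast as a single matrix product (the so-called im2col trick), which is exactly the dense-linear-algebra shape that maps well onto GPUs. The sketch below uses NumPy on the CPU purely as an illustration; the function and kernel are this article's own toy example, not code from the GeoInt Accelerator.

```python
import numpy as np

def conv2d_as_matmul(image, kernel):
    """Express a 2D convolution as one matrix multiplication (im2col).
    Illustrative CPU sketch of the dense linear algebra GPUs parallelize."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    # Gather every kh x kw patch of the image into the rows of a matrix.
    patches = np.array([
        image[i:i + kh, j:j + kw].ravel()
        for i in range(oh) for j in range(ow)
    ])
    # One matrix-vector product performs the entire filtering pass.
    return (patches @ kernel.ravel()).reshape(oh, ow)

# A horizontal gradient kernel applied to a toy 5x5 "image".
img = np.arange(25, dtype=float).reshape(5, 5)
edges = conv2d_as_matmul(img, np.array([[1.0, -1.0]]))
```

Because the whole filter reduces to one large matrix multiply, throughput scales with how fast the hardware can do dense math, which is why these workloads benefit so directly from GPUs.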
GPU adoption in this community is being powered by the need for speed, specifically real-time results. In the military context, for example, having all the GIS data in the world isn't useful if it doesn't reflect actual conditions on the ground. According to Sumit Gupta, GM of NVIDIA's Tesla division, for some common geospatial intelligence applications, including the GIS-based situational awareness program Luciad Lightspeed, users are seeing 100 calculations per second on a GPU versus one per second on a CPU, a difference that could mean seeing (or not seeing) a rapidly developing threat.
Gupta says that while the defense market, especially on the geospatial intelligence front, hasn't been front and center in much of NVIDIA's news, it actually makes up between 20 and 25 percent of the Tesla group's overall revenue, driven by the need to get faster results from massive amounts of video and image data.
This week the GPU giant reached out to this growing area with a platform built for the geospatial intelligence community, its GeoInt Accelerator. The goal of the packaged offering is to provide geospatial intelligence analysts, as well as that community's specialty developers, with an integrated suite of tools primed to take advantage of GPU speedups.
In addition to offering a number of key applications relevant to this community (spanning situational awareness, satellite imagery, and object detection software), NVIDIA has also pulled together a number of relevant libraries for defense contractors and integrators to use in building GPU-accelerated applications, including its own NVIDIA Performance Primitives, the MATLAB Imaging Toolkit, CUDA FFT, AccelerEyes' ArrayFire, and others.
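The signal-processing side of the bundle centers on the fast Fourier transform, which libraries such as CUDA FFT accelerate on the GPU. As a rough illustration of what that transform does, the snippet below uses NumPy's CPU FFT to pull the dominant frequency out of a synthetic signal; the sample rate and signal are assumptions for this sketch, and the GPU library's actual API differs.

```python
import numpy as np

# Illustrative only: NumPy's FFT computes the same discrete Fourier
# transform that a GPU library such as CUDA FFT accelerates.
fs = 1000                        # sample rate in Hz (assumed for this sketch)
t = np.arange(0, 1, 1 / fs)      # one second of samples
# Synthetic signal: a strong 50 Hz tone plus a weaker 120 Hz tone.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))           # magnitude spectrum
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)   # frequency bins in Hz
dominant = freqs[np.argmax(spectrum)]            # strongest component
```

For the volumes of sensor data the article describes, it is this transform, run millions of times over streaming signals and imagery, that the GPU versions of these libraries are meant to keep at real-time rates.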
NVIDIA is pushing this out via a number of channels: as embedded GPUs inside systems from GE Intelligent Platforms and Curtiss-Wright (which cater to this market), as well as in workstation and server form from a number of its partners, including Dell, HP, IBM, and others.
As it stands, there is a lot of variation among users, since their purposes, software, and performance needs differ. In general, though, if users stick to the recommended configurations for both servers and workstations, they are looking at around $5,000 for a loaded workstation and $10,000 or a tick below for the server piece. Of course, it is not uncommon for these organizations to use both servers and workstations for image, video, and signal processing, especially when real-time is the goal; in that case, much of the real-time work is done at the workstation, says Gupta.
Users of GPU acceleration for image, video and signal processing in the GeoInt area of defense already include the Army Research Labs, BAE Systems, Boeing, SAIC, NATO, Raytheon and others.