June 24 -- Video gaming computers and video game consoles available today typically contain a graphics processing unit (GPU), which is very efficient at manipulating and displaying computer graphics. However, the unit's highly parallel structure also makes it more efficient than a general-purpose central processing unit for a range of complex calculations important to defense applications.
Researchers in the Georgia Tech Research Institute (GTRI) and the Georgia Tech School of Electrical and Computer Engineering are developing programming tools to enable engineers in the defense industry to utilize the processing power of GPUs without having to learn the complicated programming language required to use them directly.
"As radar systems and other sensor systems get more complicated, the computational requirements are becoming a bottleneck," said GTRI senior research engineer Daniel Campbell. "We are capitalizing on the ability of GPUs to process radar, infrared sensor and video data faster than a typical computer and at a much lower cost and power than a computing cluster."
Mark Richards, a principal research engineer and adjunct professor in the School of Electrical and Computer Engineering, is collaborating with Campbell and graduate student Andrew Kerr to rewrite common signal processing commands to run on a GPU. This work is supported by the U.S. Defense Advanced Research Projects Agency and the U.S. Air Force Research Laboratory.
The researchers are writing functions defined in the Vector, Signal and Image Processing Library (VSIPL) to run on GPUs. VSIPL is an open standard developed by embedded signal and image processing hardware and software vendors, academia, application developers and government labs. GPU VSIPL is available for download at http://gpu-vsipl.gtri.gatech.edu/.
The researchers are currently writing the functions in Nvidia's CUDA language, but the underlying principles can be applied to GPUs developed by other companies, according to Campbell. With GPU VSIPL, engineers can use high-level functions in their C programs to perform linear algebra and signal processing operations, then recompile with GPU VSIPL to take advantage of the GPU's speed. Studies have shown that VSIPL functions run between 20 and 350 times faster on a GPU than on a central processing unit, depending on the function and the size of the data set.
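As a rough illustration of that workflow, the sketch below shows what a simple VSIPL-based operation looks like in C: the application creates library objects, calls a high-level function (here an in-place complex FFT), and never touches the GPU directly. The function names follow the published VSIPL C binding (vsip_init, vsip_cvcreate_f, vsip_ccfftip_create_f and so on), but the exact signatures here are a sketch from the standard's core profile, not code from the GTRI library.

    /* Sketch of a VSIPL application: an in-place complex FFT in C.
     * The same source links against a CPU VSIPL library or GPU VSIPL;
     * only the library underneath changes. Signatures follow the
     * VSIPL core profile and should be checked against the headers. */
    #include <vsip.h>

    int main(void)
    {
        const vsip_length N = 4096;      /* transform length */

        vsip_init((void *)0);            /* initialize the library */

        /* Complex vector to hold the signal, plus a forward FFT object. */
        vsip_cvview_f *signal = vsip_cvcreate_f(N, VSIP_MEM_NONE);
        vsip_fft_f *fft = vsip_ccfftip_create_f(N, 1.0F, VSIP_FFT_FWD,
                                                1, VSIP_ALG_TIME);

        /* ... fill 'signal' with sensor samples here ... */

        vsip_ccfftip_f(fft, signal);     /* transform in place */

        vsip_fft_destroy_f(fft);
        vsip_cvalldestroy_f(signal);
        vsip_finalize((void *)0);
        return 0;
    }

Because the VSIPL call is the unit of work, the library is free to move data to GPU memory, launch kernels and copy results back without any change to the application code.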
"The results are not surprising because GPUs excel at performing repetitive arithmetic tasks like those in VSIPL, such as signal processing functions like Fourier transforms, spectral analysis, image formation and noise filtering," noted Richards. "We've just alleviated the need for engineers to understand the entire GPU architecture by simply providing them with a library of routines that they frequently use."
The research team is also assessing the advantages of GPUs by running a library of benchmarks for quantitatively comparing high-performance, embedded computing systems. The benchmarks address important operations across a broad range of U.S. Department of Defense signal and image processing applications.
Preliminary studies have shown several of the benchmarks have straightforward parallelization schemes that result in faster operation without requiring significant optimization. For other benchmarks, additional research needs to be conducted into optimizing the use of multiple GPUs.
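To make "straightforward parallelization" concrete: many signal processing kernels are element-wise, meaning each output sample depends only on the matching input samples, so one GPU thread can compute one element with no coordination between threads. The CUDA sketch below, a pointwise complex multiply of the kind used in frequency-domain filtering, is a hypothetical illustration, not code from the benchmark suite.

    #include <cuComplex.h>

    /* Pointwise complex multiply: out[i] = a[i] * b[i], as used when
     * applying a filter's frequency response to a signal's spectrum.
     * Each thread computes one sample, so the kernel needs no
     * inter-thread communication -- the "straightforward" case. */
    __global__ void cmul(const cuFloatComplex *a, const cuFloatComplex *b,
                         cuFloatComplex *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = cuCmulf(a[i], b[i]);
    }

    /* Illustrative launch: one thread per sample, 256 threads per block. */
    void apply_filter(const cuFloatComplex *a, const cuFloatComplex *b,
                      cuFloatComplex *out, int n)
    {
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        cmul<<<blocks, threads>>>(a, b, out, n);
    }

Benchmarks dominated by reductions or by data movement between devices fall outside this simple pattern, which is where the additional multi-GPU optimization research comes in.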
In the future, the researchers plan to continue expanding GPU VSIPL, develop additional defense-related GPU function libraries and design programming tools that target other efficient processors, such as the Cell Broadband Engine processor at the heart of the PlayStation 3 video game console.
Source: Abby Vogel, Georgia Institute of Technology