Emerging Architectures Boost Geospatial Application Performance

By Chenggang Lai, Miaoqing Huang, Xuan Shi, and Haihang You

January 23, 2014

Geospatial data is critical in a variety of applications – including transportation planning, hydrological network and watershed analysis, environmental modeling and surveillance, emergency response, and military operations. As the availability of geospatial data has expanded and its volume has grown, the resulting challenges and complexities have rendered traditional desktop-based geographical information systems (GIS) and remote-sensing software incapable of providing the requisite processing power.

Intel’s Many Integrated Core (MIC) architecture and the graphics processing unit (GPU) employ parallelism to achieve scalability with high performance for data-intensive computing over high-resolution spatial data. Our research has demonstrated that hybrid computer clusters equipped with the latest Intel MIC processors and NVIDIA GPUs can achieve significant performance improvements across a range of typical geospatial applications, using Kriging interpolation, ISODATA, and Cellular Automata as examples. Details of our study are contained in a paper titled “Accelerating Geospatial Applications on Hybrid Architectures” in the proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing. The co-authors of the paper were Chenggang Lai, Miaoqing Huang, and Xuan Shi of the University of Arkansas, and Haihang You of the National Institute for Computational Sciences.

Coprocessor architecture

GPU architecture has been evolving for many years. NVIDIA is a case in point, having gone through several generations, from G80 to GT200, Fermi, and today’s Kepler. The Kepler GPU architecture contains 15 streaming multiprocessors (SMXes), each of which consists of 192 single-precision cores and 64 double-precision cores. Kepler also provides three advanced features: Hyper-Q, which lets multiple host threads or processes share a GPU’s resources efficiently; dynamic parallelism, which lets kernels create new kernels directly on the GPU; and GPUDirect, which reduces communication overhead across GPUs. GPUs are normally used as accelerators in high-performance computer clusters: in a typical MPI-based parallel application, each MPI process executes on a host CPU, which in turn offloads the computation to one or more client GPUs.

Figure 1: NVIDIA’s Kepler GPU architecture. Image source: Lai et al., “Accelerating Geospatial Applications on Hybrid Architectures,” Proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, 1545–1552, 2013.
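
In practice, this host-client division amounts to only a few lines of setup. The following is a minimal sketch (ours, not the paper’s code) of how an MPI rank might claim one of its node’s GPUs before offloading work, assuming the CUDA runtime and an MPI library are available; with three ranks per node, this maps one rank onto each of Keeneland’s three M2090s.

    /* Minimal sketch: bind one MPI process per GPU on a multi-GPU node. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        int rank, ndev;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaGetDeviceCount(&ndev);     /* GPUs visible on this node */
        cudaSetDevice(rank % ndev);    /* bind this rank to one GPU */

        /* ... copy this rank's data partition to the GPU,
         * launch kernels, and copy the results back ... */

        MPI_Finalize();
        return 0;
    }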

The first commercially available Intel coprocessor based on the MIC architecture is the Xeon Phi. It contains up to 61 scalar processors with vector processing units, and direct communication between MIC coprocessors across different nodes is supported through MPI. The following figures show two approaches to parallelizing applications on computer clusters equipped with MIC coprocessors. The first approach is to treat the MIC coprocessors as clients of the host CPUs: the MPI processes run on the CPUs, which offload the computation to the MIC coprocessors, using a multithreading programming model such as OpenMP to allocate many cores for data processing. The second approach is to let each MIC core directly host one MPI process, so that the 60 cores on a die are treated as 60 independent processors while sharing the 8 GB of on-board memory on the Xeon Phi 5110P.

Figure 2: Offloading approach to implementing parallelism on the MIC cluster. Image source: Lai et al., “Accelerating Geospatial Applications on Hybrid Architectures,” Proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, 1545–1552, 2013.

Figure 3: Direct-host approach to implementing parallelism on the MIC cluster. Image source: Lai et al., “Accelerating Geospatial Applications on Hybrid Architectures,” Proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, 1545–1552, 2013.
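
As a concrete illustration of the offload approach in Figure 2, the sketch below uses the Intel compiler’s offload pragmas of that era together with OpenMP; the per-cell kernel compute_cell and the flat buffer layout are illustrative assumptions, not the paper’s code.

    /* Sketch of the offload approach: a host MPI process ships its tile
     * to the local Xeon Phi, which spreads the loop across its cores. */
    #include <omp.h>

    /* make the kernel callable on the coprocessor */
    __attribute__((target(mic))) static float compute_cell(float v)
    {
        return v;                      /* placeholder computation */
    }

    void process_tile(const float *src, float *dst, int n)
    {
        #pragma offload target(mic:0) in(src : length(n)) out(dst : length(n))
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            dst[i] = compute_cell(src[i]);
    }

The direct-host approach in Figure 3 needs no offload pragmas at all: the same MPI program is cross-compiled for the coprocessor and launched natively, with Intel MPI, via something like mpiexec.hydra -host mic0 -n 60 ./app.mic (an illustrative command line).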

Benchmarks

Three different types of use case served as the benchmarks for this study: Kriging interpolation (embarrassingly parallel), the Iterative Self-organizing Data-analysis Technique Algorithm (ISODATA) (loosely coupled communication), and Cellular Automata (communication-intensive).

Kriging is a geostatistical estimator that infers the value of a random field at an unobserved location. It can be viewed as point interpolation that reads input point data and returns a raster grid with a calculated estimate for each cell.
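
The embarrassingly parallel structure is easy to see in code. The sketch below (ours, not the paper’s) splits the output raster among MPI ranks by rows, and each cell is estimated independently from the sample points; the inverse-distance weight is a simplified stand-in, whereas ordinary kriging would instead derive weights from a semivariogram by solving a small linear system per cell.

    /* Sketch: each rank interpolates its own band of raster rows. */
    typedef struct { double x, y, v; } Sample;

    void interpolate_rows(const Sample *s, int ns, double *out,
                          int row0, int nrows, int width)
    {
        for (int r = row0; r < row0 + nrows; r++)     /* this rank's rows */
            for (int c = 0; c < width; c++) {
                double num = 0.0, den = 0.0;
                for (int i = 0; i < ns; i++) {
                    double dx = s[i].x - c, dy = s[i].y - r;
                    double w = 1.0 / (dx * dx + dy * dy + 1e-12);
                    num += w * s[i].v;                /* weighted sample */
                    den += w;
                }
                out[(r - row0) * width + c] = num / den;
            }
    }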

ISODATA is one of the most frequently used algorithms for unsupervised image classification in remote-sensing applications. In general, it proceeds in three steps: (1) calculate the initial mean value of each class; (2) classify each pixel to the nearest class; and (3) recalculate each class mean from all pixels assigned to that class. The second and third steps are repeated until the change between two iterations is small enough. When multiple processors are used, only one summation across all processors is required in each iteration.
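
A minimal sketch of one such iteration follows (ours, not the paper’s code), simplified to a single band for brevity: each rank classifies its own pixels, accumulates per-class sums and counts locally, and a single MPI_Allreduce merges them so every rank can recompute the class means.

    /* Sketch of one ISODATA iteration with a single global reduction. */
    #include <mpi.h>
    #include <float.h>

    #define K 15   /* number of target classes in the benchmark */

    void isodata_step(const float *pix, long npix, double mean[K])
    {
        double local[2 * K] = {0}, global[2 * K]; /* [0..K-1] sums, [K..2K-1] counts */

        for (long i = 0; i < npix; i++) {         /* step 2: nearest class */
            int best = 0; double bestd = DBL_MAX;
            for (int k = 0; k < K; k++) {
                double d = pix[i] - mean[k];
                if (d * d < bestd) { bestd = d * d; best = k; }
            }
            local[best] += pix[i];
            local[K + best] += 1.0;
        }

        /* the single per-iteration summation across all processors */
        MPI_Allreduce(local, global, 2 * K, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        for (int k = 0; k < K; k++)               /* step 3: new means */
            if (global[K + k] > 0.0) mean[k] = global[k] / global[K + k];
    }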

Cellular Automata are commonly used in a variety of geospatial modeling and simulation tasks. The Game of Life (GOL), invented by British mathematician John Conway, is a well-known generic Cellular Automaton that consists of a collection of cells that live, die, or multiply according to a few mathematical rules. The universe of the GOL is a two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive (‘1’) or dead (‘0’). Every cell interacts with its eight neighbors: the cells that are horizontally, vertically, or diagonally adjacent.
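
The communication pattern that makes this benchmark network-intensive is visible in a row-decomposed sketch like the one below (ours, not the paper’s code): before each generation, every rank must exchange one ghost row with each of its neighbors. The neighbor ranks up and down are assumed to be MPI_PROC_NULL at the boundaries, or wired periodically.

    /* Sketch of one GOL generation with row-wise domain decomposition.
     * g has rows+2 rows of cols cells: ghost row 0 and ghost row rows+1. */
    #include <mpi.h>

    void gol_step(unsigned char *g, unsigned char *nxt,
                  int rows, int cols, int up, int down)
    {
        /* exchange ghost rows with neighboring ranks */
        MPI_Sendrecv(&g[1 * cols],          cols, MPI_UNSIGNED_CHAR, up,   0,
                     &g[(rows + 1) * cols], cols, MPI_UNSIGNED_CHAR, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&g[rows * cols],       cols, MPI_UNSIGNED_CHAR, down, 1,
                     &g[0],                 cols, MPI_UNSIGNED_CHAR, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        for (int r = 1; r <= rows; r++)
            for (int c = 0; c < cols; c++) {
                int n = 0;                        /* count live neighbors */
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++)
                        if (dr || dc) {
                            int cc = (c + dc + cols) % cols;  /* wrap columns */
                            n += g[(r + dr) * cols + cc];
                        }
                unsigned char alive = g[r * cols + c];
                nxt[r * cols + c] = (n == 3) || (alive && n == 2);
            }
    }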

Experiment setup

We conducted our experiments on two platforms: the National Science Foundation-sponsored Keeneland supercomputer and the Beacon supercomputer. The Keeneland Initial Delivery System (KIDS) is a 201-teraflop, 120-node HP SL390 system with 240 Intel Xeon X5660 CPUs and 360 NVIDIA Fermi GPUs, with the nodes connected by a QDR InfiniBand network. Each node has two 6-core 2.8 GHz Xeon CPUs and three Tesla M2090 GPUs; the M2090 contains 512 CUDA cores and 6 GB of GDDR5 on-board memory. The Beacon system (a Cray CS300-AC cluster supercomputer) offers access to 48 compute nodes and 6 I/O nodes joined by an FDR InfiniBand interconnect providing 56 Gb/s of bidirectional bandwidth. Each compute node is equipped with two Intel Xeon E5-2670 8-core 2.6 GHz processors, four Intel Xeon Phi 5110P (MIC) coprocessors, 256 GB of RAM, and 960 GB of SSD storage; each I/O node provides access to an additional 4.8 TB of SSD storage. For each benchmark, we built three parallel implementations across the two clusters: MPI+CPU, MPI+MIC, and MPI+GPU.

Results

Figure 4: Performance of the benchmarks on four different configurations: (a) Kriging, (b) ISODATA, (c) GOL. Image source: Lai et al., “Accelerating Geospatial Applications on Hybrid Architectures,” Proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, 1545–1552, 2013.

Our goal was to demonstrate the strong scalability of the parallel implementations, so the problem size is fixed for each benchmark while the number of participating MPI processes increases.

In the Kriging interpolation benchmark, the source data is evenly partitioned among the MPI processes in row-major order, and the computation in each MPI process is purely local, i.e., there is no cross-process communication. The input comprises four datasets totaling 171 MB, and the output raster grid for each dataset has a consistent dimension of 1,440×720. The performance of a K20-based GPU cluster is projected from the measured speedup of a single K20 over an M2090, assuming the cluster’s other specifications match Keeneland KIDS. Figure 4(a) shows that every hybrid implementation easily outperforms the CPU-only parallel implementation, with the GPUs in turn ahead of the MIC coprocessors.

The input to ISODATA is an 18 GB high-resolution image with a dimension of 80,000×80,000 pixels across three bands; the objective of this benchmark is to classify the image into 15 classes. Here the gap between the MIC coprocessors and the GPUs becomes quite small. One reason is that the FDR InfiniBand network on Beacon provides much higher bandwidth than the QDR InfiniBand network on Keeneland KIDS, and the advantage of Beacon’s more efficient communication network is demonstrated further when the number of participating processors increases from 100 to 120.

In the Game of Life benchmark, the grid size is 32,768×32,768, and the status of each cell is updated for 100 iterations. The performance results show strong scalability for the MPI implementations on both CPUs and GPUs. The MPI+MIC implementation, however, does not scale as well, owing to the communication overhead among its many MPI processes. Keeping computation and communication in balance is therefore critical to achieving the best performance.

Conclusion

In our study, we have shown the potential for accelerating geospatial applications through parallel implementation on hybrid computer clusters. MPI+GPU and MPI+MIC parallel implementations of representative geospatial applications achieve significant performance improvements over the traditional MPI+CPU parallel implementation. We also found that the simple MPI-direct-host programming model on an Intel MIC cluster can match the performance of the MPI+GPU model on GPU clusters when the same number of processors is allocated. An efficient cross-node communication network remains the key to achieving strong scalability for parallel applications running on multiple nodes. In general, geospatial computation consists of functional modules that process (1) vector geometric data, (2) network and graph data, (3) raster grid data, and (4) imagery data. A variety of research challenges remain in deploying heterogeneous computer architectures and systems to handle these different data structures and geospatial computation problems in the future.

The paper on this research can be accessed at http://www.csce.uark.edu/~mqhuang/papers/2013_gis_hpcc.pdf.

Research Team Bios

Miaoqing Huang is an Assistant Professor in the Department of Computer Science and Computer Engineering, University of Arkansas. His research interests include operating system and infrastructure design for manycore computer systems, hardware acceleration technologies (such as FPGAs and GPUs), and on-board cache design in nonvolatile memory-based solid-state drives (SSDs). He earned his doctoral degree in computer engineering from The George Washington University in 2009. He can be reached at mqhuang@uark.edu.

Xuan Shi is an Assistant Professor in the Department of Geosciences, University of Arkansas. His research interests include geoinformatics, geospatial cyberinfrastructure, and high-performance geocomputation, among others. He earned his doctoral degree in geography from West Virginia University in 2007. He can be reached at xuanshi@uark.edu.

Haihang You is a Computational Scientist at the National Institute for Computational Sciences, University of Tennessee. Prior to joining NICS, he was a research associate at the Innovative Computing Laboratory, Department of Electrical Engineering and Computer Science, University of Tennessee. His research interests are high-performance computing; performance analysis and evaluation; compilers and automatic tuning and optimization systems; linear algebra; iterative adaptive discontinuous Galerkin finite element methods; parallel I/O tuning on Lustre; and system utilization analysis and improvement on supercomputers. He can be reached at hyou@utk.edu.
