Emerging Architectures Boost Geospatial Application Performance

By Chenggang Lai, Miaoqing Huang, Xuan Shi, and Haihang You

January 23, 2014

Geospatial data is critical in a variety of applications, including transportation planning, hydrological network and watershed analysis, environmental modeling and surveillance, emergency response, and military operations. As the availability of geospatial data has expanded, its volume has grown rapidly, creating challenges and complexities that render traditional desktop-based geographical information systems (GIS) and remote-sensing software incapable of providing the requisite processing power.

Intel’s Many Integrated Core (MIC) architecture and the graphics processing unit (GPU) employ parallelism to achieve scalability with high performance for data-intensive computing over high-resolution spatial data. Our research has demonstrated that hybrid computer clusters equipped with the latest Intel MIC processors and NVIDIA GPUs can achieve a significant performance improvement for a range of typical geospatial applications, with Kriging interpolation, ISODATA, and Cellular Automata as examples. Details of our study are contained in a paper titled “Accelerating Geospatial Applications on Hybrid Architectures” in the proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing. The co-authors of the paper were Chenggang Lai, Miaoqing Huang, and Xuan Shi of the University of Arkansas, and Haihang You of the National Institute for Computational Sciences.

Coprocessor architecture

GPU architecture has been evolving for many years. Nvidia is a case in point, having gone through several generations, from G80 to GT200, Fermi, and today's Kepler. The Kepler GPU architecture contains 15 streaming multiprocessors (SMXes), each of which consists of 192 single-precision cores and 64 double-precision cores. The Kepler architecture provides three advanced features: Hyper-Q, which efficiently shares GPU resources among multiple host threads or processes; dynamic parallelism, which allows kernels to launch new kernels directly on the GPU; and GPUDirect, which reduces communication overhead across GPUs. GPUs are normally used as accelerators in high-performance computer clusters. In a typical MPI-based parallel application, each MPI process executes on a host CPU, which in turn offloads the computation to one or more client GPUs.

Figure 1: NVIDIA's Kepler GPU architecture. Image source: Lai et al., “Accelerating Geospatial Applications on Hybrid Architectures,” Proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, 1545–1552, 2013.
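To make the MPI+GPU model concrete, here is a minimal sketch in C (our illustration, not code from the paper) in which each MPI process binds itself to one of the GPUs on its node before launching kernels; on a Keeneland KIDS node, for example, three MPI processes would map onto the three M2090 GPUs:

    /* Minimal sketch of the MPI+GPU host model: one MPI process per GPU.
     * Assumes the CUDA runtime and an MPI library; error handling elided. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, ngpus;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        cudaGetDeviceCount(&ngpus);

        /* Bind this process to one of the node's GPUs. */
        cudaSetDevice(rank % ngpus);

        /* ... allocate device buffers, copy this rank's partition of the
         * dataset, launch CUDA kernels, and copy the results back ... */

        MPI_Finalize();
        return 0;
    }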

The first commercially available Intel coprocessor based on the MIC architecture is the Xeon Phi. It contains up to 61 scalar processors with vector processing units. Direct communication between MIC coprocessors across different nodes is also supported through MPI. The following images show two approaches to parallelizing applications on computer clusters equipped with MIC processors. The first approach is to treat the MIC processors as clients to the host CPUs: the MPI processes are hosted by the CPUs, which offload the computation to the MIC processors, where multithreading programming models such as OpenMP can be used to engage the many cores for data processing. The second approach is to let each MIC core directly host one MPI process; in this way, the 60 cores on the same die are treated as 60 independent processors, sharing the 8 GB of on-board memory on the Xeon Phi 5110P. (A code sketch of the offload approach follows the two figures.)

Figure 2: Offloading approach to implementing parallelism on the MIC cluster. Image source: Lai et al., “Accelerating Geospatial Applications on Hybrid Architectures,” Proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, 1545–1552, 2013.

Figure 3: Direct-host approach to implementing parallelism on the MIC cluster. Image source: Lai et al., “Accelerating Geospatial Applications on Hybrid Architectures,” Proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, 1545–1552, 2013.
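The sketch below (ours, not the authors' code) illustrates the offload approach using Intel's legacy offload pragma for the Xeon Phi together with OpenMP inside the offload region. The direct-host approach, by contrast, requires no offload directives at all: the ordinary MPI program is cross-compiled for the coprocessor (e.g., with the compiler's -mmic flag) and each MIC core hosts its own MPI rank.

    /* Sketch of the offload approach: the MPI process runs on the host CPU
     * and pushes a data-parallel loop to the MIC coprocessor. Uses Intel's
     * legacy offload pragma (icc); illustrative only. */
    #include <omp.h>

    void process_partition(float *src, float *dst, int n)
    {
        /* Transfer 'src' to the coprocessor, run the loop across the
         * MIC's cores under OpenMP, and bring 'dst' back to the host. */
        #pragma offload target(mic:0) in(src : length(n)) out(dst : length(n))
        {
            #pragma omp parallel for
            for (int i = 0; i < n; i++) {
                dst[i] = src[i];   /* per-cell geospatial operation goes here */
            }
        }
    }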

Benchmarks

Three different types of use case served as the benchmarks for this study: Kriging interpolation (embarrassingly parallel), the Iterative Self-organizing Data-analysis Technique Algorithm (ISODATA) (loose communication during the computation), and Cellular Automata (intense communication).

Kriging is a geostatistical estimator that infers the value of a random field at an unobserved location. It can be viewed as a point interpolation that reads input point data and returns a raster grid with a calculated estimate for each cell.
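Because each output cell can be estimated independently, the computation parallelizes by simply dividing the output raster among MPI ranks. The sketch below is our illustration of that partitioning: for brevity it uses inverse-distance weights, whereas the Kriging estimator derives its weights from a semivariogram model by solving a small linear system per cell.

    /* Embarrassingly parallel interpolation sketch: each MPI rank fills
     * its own block of rows of the output raster independently, with no
     * cross-process communication. Inverse-distance weighting stands in
     * for the Kriging weights; point coordinates are assumed to be in
     * raster cell units. */
    typedef struct { double x, y, v; } Point;

    void interpolate_rows(const Point *pts, int npts,
                          double *raster, int width,
                          int row_begin, int row_end)
    {
        for (int r = row_begin; r < row_end; r++) {
            for (int c = 0; c < width; c++) {
                double num = 0.0, den = 0.0;
                for (int k = 0; k < npts; k++) {
                    double dx = c - pts[k].x, dy = r - pts[k].y;
                    double w = 1.0 / (dx * dx + dy * dy + 1e-12);
                    num += w * pts[k].v;
                    den += w;
                }
                raster[r * width + c] = num / den;
            }
        }
    }

With P ranks and an H-row raster, rank i computes rows [i*H/P, (i+1)*H/P), and the partial rasters are gathered once at the end.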

ISODATA is one of the most frequently used algorithms for unsupervised image classification in remote-sensing applications. In general, it can be implemented in three steps: (1) calculate the initial mean value of each class; (2) classify each pixel to the nearest class; and (3) calculate the new class means based on all pixels in each class. The second and third steps are repeated until the change between two iterations is small enough. When multiple processors are used, only one summation across all processors is required in each iteration.
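That single summation per iteration maps naturally onto MPI_Allreduce. The following sketch of one iteration is our illustration (simplified to single-band pixels and a fixed class count); packing the per-class sums and counts into one buffer keeps it to a single reduction:

    /* One ISODATA-style iteration: classify local pixels to the nearest
     * class mean, then merge per-class sums and counts from all ranks
     * with a single MPI_Allreduce. Sketch only. */
    #include <mpi.h>
    #include <math.h>

    #define NCLASS 15

    void isodata_iteration(const float *pixels, long nlocal, double *means)
    {
        double acc[2 * NCLASS] = {0};   /* per-class sums, then counts */

        /* Step (2): assign each local pixel to its nearest class. */
        for (long i = 0; i < nlocal; i++) {
            int best = 0;
            double bestd = fabs(pixels[i] - means[0]);
            for (int c = 1; c < NCLASS; c++) {
                double d = fabs(pixels[i] - means[c]);
                if (d < bestd) { bestd = d; best = c; }
            }
            acc[best] += pixels[i];
            acc[NCLASS + best] += 1.0;
        }

        /* Step (3): the one global summation per iteration. */
        MPI_Allreduce(MPI_IN_PLACE, acc, 2 * NCLASS, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);

        for (int c = 0; c < NCLASS; c++)
            if (acc[NCLASS + c] > 0.0)
                means[c] = acc[c] / acc[NCLASS + c];
    }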

Cellular Automata are commonly used in a variety of geospatial modeling and simulation applications. The Game of Life (GOL), invented by the British mathematician John Conway, is a well-known generic Cellular Automaton consisting of a collection of cells that live, die, or multiply according to a few mathematical rules. The universe of the GOL is a two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive (‘1’) or dead (‘0’). Every cell interacts with its eight neighbors, which are the cells that are horizontally, vertically, or diagonally adjacent.
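When the grid is partitioned by rows across MPI processes, each rank must exchange one boundary row with each neighboring rank in every iteration; it is this halo exchange that makes GOL the communication-intensive benchmark. A sketch of one update step (our illustration, not the paper's code):

    /* One Game of Life step on a row-partitioned grid. Each rank owns
     * rows 1..rows; rows 0 and rows+1 are halo copies of the neighbors'
     * boundary rows. Boundary ranks pass MPI_PROC_NULL for up/down,
     * which turns the matching send/receive into a no-op. Sketch only. */
    #include <mpi.h>
    #include <string.h>

    void gol_step(unsigned char *grid, unsigned char *next,
                  int rows, int cols, int up, int down)
    {
        /* Halo exchange: first owned row goes up, last owned row goes down. */
        MPI_Sendrecv(&grid[cols], cols, MPI_UNSIGNED_CHAR, up, 0,
                     &grid[(rows + 1) * cols], cols, MPI_UNSIGNED_CHAR, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&grid[rows * cols], cols, MPI_UNSIGNED_CHAR, down, 1,
                     &grid[0], cols, MPI_UNSIGNED_CHAR, up, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Apply Conway's rules to the owned rows. */
        for (int r = 1; r <= rows; r++) {
            for (int c = 0; c < cols; c++) {
                int n = 0;
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++) {
                        if (dr == 0 && dc == 0) continue;
                        int cc = (c + dc + cols) % cols;   /* wrap columns */
                        n += grid[(r + dr) * cols + cc];
                    }
                unsigned char alive = grid[r * cols + c];
                next[r * cols + c] = (n == 3 || (alive && n == 2)) ? 1 : 0;
            }
        }
        memcpy(&grid[cols], &next[cols], (size_t)rows * cols);
    }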

Experiment setup

We conducted our experiments on two National Science Foundation-sponsored platforms, the Keeneland and Beacon supercomputers. The Keeneland Initial Delivery System (KIDS) is a 201-teraflop, 120-node HP SL390 system with 240 Intel Xeon X5660 CPUs and 360 Nvidia Fermi GPUs, with the nodes connected by a QDR InfiniBand network. Each node has two 6-core 2.8 GHz Xeon CPUs and three Tesla M2090 GPUs; the Nvidia M2090 GPU contains 512 CUDA cores and 6 GB of GDDR5 on-board memory. The Beacon system (a Cray CS300-AC Cluster Supercomputer) offers access to 48 compute nodes and 6 I/O nodes joined by an FDR InfiniBand interconnect providing 56 Gb/s of bi-directional bandwidth. Each compute node is equipped with two Intel Xeon E5-2670 8-core 2.6 GHz processors, four Intel Xeon Phi 5110P (MIC) coprocessors, 256 GB of RAM, and 960 GB of SSD storage; each I/O node provides access to an additional 4.8 TB of SSD storage. For each benchmark, we developed three parallel implementations across the two clusters: MPI+CPU, MPI+MIC, and MPI+GPU.

Results

Figure 4: Performance of the benchmarks on four different configurations: (a) Kriging, (b) ISODATA, (c) GOL. Image source: Lai et al., “Accelerating Geospatial Applications on Hybrid Architectures,” Proceedings of the 2013 IEEE International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing, 1545–1552, 2013.

To demonstrate the strong scalability of the parallel implementations, the problem size is fixed for each benchmark while the number of participating MPI processes is increased.

In the Kriging interpolation benchmark, the source dataset is evenly partitioned among all MPI processes by rows, and the computation in each MPI process is purely local, i.e., there is no cross-process communication. The problem size of this benchmark is 171 MB, consisting of four datasets; the output raster grid for each dataset has a consistent dimension of 1,440×720. The performance of a GPU cluster with K20 GPUs is projected from the speedup of a single K20 over an M2090, assuming the other specifications of the K20 cluster match those of the Keeneland KIDS. The figure shows that both hybrid implementations easily outperform the CPU-only parallel implementation, with the GPU implementation ahead of the MIC.

The input to the ISODATA benchmark is an 18 GB high-resolution image with a dimension of 80,000×80,000 pixels across three bands; the objective is to classify the image into 15 classes. For this benchmark, the gap between the MIC processors and the GPUs becomes quite small. One reason is that the FDR InfiniBand network on Beacon provides much higher bandwidth than the QDR InfiniBand network on Keeneland KIDS. The advantage of Beacon's more efficient communication network is further demonstrated when the number of participating processors is increased from 100 to 120.

In the Game of Life benchmark, the grid size is 32,768×32,768, and the status of each cell is updated for 100 iterations. The performance results demonstrate strong scalability for the MPI implementations on both CPUs and GPUs. The MPI+MIC implementation, however, does not scale as well, owing to the communication overhead among its many MPI processes. Keeping a balance between computation and communication is therefore critical to achieving the best performance.

Conclusion

In our study, we have shown the potential for accelerating geospatial applications through parallel implementation on hybrid computer clusters. The MPI+GPU and MPI+MIC parallel implementations of representative geospatial applications achieve significant performance improvements compared with the traditional MPI+CPU parallel implementation. We also found that the simple MPI-direct-host programming model on an Intel MIC cluster can achieve performance equivalent to the MPI+GPU model on GPU clusters when the same number of processors is allocated. An efficient cross-node communication network remains the key to achieving strong scalability for parallel applications running on multiple nodes. In general, geospatial computation consists of functional modules that process (1) vector geometric data, (2) network and graph data, (3) raster grid data, and (4) imagery data. A variety of research challenges remain in deploying heterogeneous computer architectures and systems to handle these different data structures and geospatial computation problems in the future.

The paper on this research can be accessed at http://www.csce.uark.edu/~mqhuang/papers/2013_gis_hpcc.pdf.

Research Team Bios

Miaoqing Huang is an Assistant Professor in the Department of Computer Science and Computer Engineering, University of Arkansas. His research interests include operating system and infrastructure design for manycore computer systems, hardware acceleration technologies (such as FPGAs and GPUs), and on-board cache design in nonvolatile memory-based solid-state drives (SSDs). He earned his doctoral degree in computer engineering from The George Washington University in 2009. He can be reached at mqhuang@uark.edu.

Xuan Shi is an Assistant Professor in the Department of Geosciences, University of Arkansas. His research interests include geoinformatics, geospatial cyberinfrastructure, and high-performance geocomputation, among others. He earned his doctoral degree in geography from West Virginia University in 2007. He can be reached at xuanshi@uark.edu.

Haihang You is a Computational Scientist at the National Institute for Computational Sciences, University of Tennessee. Prior to joining NICS, he was a research associate at the Innovative Computing Laboratory, Department of Electrical Engineering and Computer Science, University of Tennessee. His research interests include high-performance computing; performance analysis and evaluation; compilers and automatic tuning and optimization systems; linear algebra; iterative adaptive discontinuous Galerkin finite element methods; parallel I/O tuning on Lustre; and system utilization analysis and improvement on supercomputers. He can be reached at hyou@utk.edu.
