February 07, 2013
The top research stories of the week have been hand-selected from prominent science journals and leading conference proceedings. Here's another diverse set of items, including GPGPU programming challenges, checkpoint-restart for HPC cloud applications, novel methods for detecting concurrency errors, some distributed computing primers, and the latest patent applications.
Improving GPGPU Concurrency
Computer scientists from the Supercomputer Education and Research Centre at the Indian Institute of Science in Bangalore, India, have written a paper [PDF] describing their efforts to improve GPGPU concurrency with elastic kernels.
Researchers Sreepathi Pai, Matthew J. Thazhuthaveetil, and R. Govindarajan observe that each new generation of GPUs increases the resources available to GPGPU programs. They further note that GPU programming models, such as CUDA, were designed to scale to make use of these resources, yet significant under-utilization persists: the Parboil2 benchmarks used in their work occupy only 20–70% of GPU resources on average. They set out to examine whether the concurrent kernel execution supported by current GPUs can close that gap.
"In this work, we study concurrent execution of GPU kernels using multiprogram workloads on current NVIDIA Fermi GPUs. On two-program workloads from the Parboil2 benchmark suite we ﬁnd concurrent execution is often no better than serialized execution. We identify that the lack of control over resource allocation to kernels is a major serialization bottleneck. We propose transformations that convert CUDA kernels into elastic kernels which permit ﬁne-grained control over their resource usage."
The researchers identify several possible solutions and evaluate them on real hardware. Using multiprogrammed workloads constructed from the Parboil2 benchmarks, they demonstrate a 1.21x improvement in system throughput and a 3.73x improvement in average normalized turnaround time (ANTT) for two-program workloads. Naturally, this short summary does not do justice to the complexity of the work involved, but the paper is clearly written and detailed, and should be a welcome addition to the growing body of GPGPU research.
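For readers unfamiliar with the metric, ANTT is commonly defined as the average slowdown each program experiences when sharing the machine, relative to running alone, so lower is better:

ANTT = \frac{1}{n} \sum_{i=1}^{n} \frac{T_i^{\text{shared}}}{T_i^{\text{alone}}}

A 3.73x improvement therefore means that each program's turnaround time under concurrent execution moves much closer to what it would see with the GPU to itself.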
Virtual disk-based checkpoint-restart for HPC cloud applications
Noted researchers Bogdan Nicolae (IBM Ireland) and Franck Cappello (INRIA) have published a paper in the February issue of the Journal of Parallel and Distributed Computing describing a novel virtual disk-based checkpoint-restart mechanism for HPC applications on IaaS clouds.
While cloud computing is making inroads into industry and academia as an alternative to running HPC applications on bare-metal, on-premise systems, important barriers remain. Given "the need to provide fault tolerance, support for suspend-resume and offline migration," the authors write, "an efficient Checkpoint-Restart mechanism becomes paramount in this context."
To meet this challenge, they propose BlobCR, which they describe as "a dedicated checkpoint repository that is able to take live incremental snapshots of the whole disk attached to the virtual machine (VM) instances." BlobCR reduces the performance overhead of checkpointing by persisting VM disk snapshots asynchronously in the background using a low-overhead technique.
The mechanism supports both application-level and process-level checkpointing, as well as rolling back file system changes. The authors carried out large-scale experiments, achieving positive results both in artificial settings and with a real HPC application.
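To make the workflow concrete, here is a minimal sketch of what application-level checkpointing on top of such a repository might look like. Note that blobcr_request_snapshot() is a hypothetical stand-in invented for this sketch; the paper describes the mechanism, not a public API:

#include <stdio.h>
#include <unistd.h>   // fsync, fileno

// Hypothetical stand-in for whatever hook BlobCR exposes to request a live
// incremental snapshot of the VM's attached disk.
static void blobcr_request_snapshot(void) { /* signal the checkpoint repository */ }

// Application-level checkpoint: persist solver state to the virtual disk,
// force it out of the page cache, then request an incremental snapshot.
// The snapshot is persisted asynchronously, so computation resumes quickly.
void checkpoint(const double *state, size_t n, int step) {
    char path[64];
    snprintf(path, sizeof path, "/ckpt/state-%06d.bin", step);
    FILE *f = fopen(path, "wb");
    if (!f) return;
    fwrite(state, sizeof(double), n, f);
    fflush(f);
    fsync(fileno(f));            // data must reach the virtual disk first
    fclose(f);
    blobcr_request_snapshot();   // cheap: copy-on-write, flushed in background
}

int main(void) {
    double state[1024] = {0};
    checkpoint(state, 1024, 0);
}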
An Efficient Method for Detecting Concurrency Errors
With the spread of multicore and multithreaded processors, concurrent programs have become more prevalent as a way to take advantage of the hardware. But errors are more likely in concurrent code, and conventional error-detection methods do not scale well. As a result, concurrency errors, such as data races and deadlocks, are a growing cause of system faults.
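A data race, the simplest of these errors, takes only a few lines to produce. This toy C++ example (ours, for illustration; it is not from the work discussed below) shows the kind of bug such detectors hunt for:

#include <stdio.h>
#include <thread>

long counter = 0;                  // shared, with no synchronization

void work() {
    for (int i = 0; i < 1000000; ++i)
        ++counter;                 // unsynchronized read-modify-write: a data race
}

int main() {
    std::thread a(work), b(work);
    a.join();
    b.join();
    // Should print 2000000, but the race makes the result nondeterministic.
    printf("%ld\n", counter);
}

Conventional detectors must reason about every possible interleaving of the two loops, which is exactly why scalability is hard.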
The problem has attracted the interest of a group of computer scientists from the State Key Laboratory of Software Engineering, School of Computers, Wuhan University. Their work explores methods for improving the trustworthiness of concurrent programs and presents a novel, efficient method for detecting concurrency errors in object-oriented programs that combines static and dynamic analysis. Their paper, "An efficient method for detecting concurrency errors in object-oriented programs," appears in Science China, Vol. 55, No. 12 (2012).
The three-member team argues: "Implementation and promotion of this work will increase development efficiency of concurrency software, and improve the dependability of concurrent systems. It will contribute greatly towards reducing the complexity of concurrency error detection, thereby reducing manual overhead and economic cost."
Concurrent, parallel and distributed computation for beginners (or their teachers)
While much of computing education research focuses on introductory-level subjects, the journal ACM Transactions on Computing Education (TOCE) devotes an entire issue to the other end of the spectrum: the learning of advanced subjects, identified here as "concurrent, parallel and distributed computation."
The issue includes four approaches to teaching these subjects, each in keeping with typical university budgets. The proposed lessons cover MapReduce in a cloud, a network of gaming consoles, remote computing on a multicore system, and software modeling using formal specifications.
The first article describes the experiences of a group of researchers teaching MapReduce in a large undergraduate lecture course using public cloud services and the standard Hadoop API. The second explores an innovative method for designing an affordable high-performance cluster using the PlayStation 3 (PS3). The third uses the educational operating system Xipx to give students system-programming experience in a distributed message-passing environment. The final article presents an undergraduate course on concurrent programming that applies formal specifications at different stages of the learning process.
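For those who have not met it, the canonical first exercise in a MapReduce course is word count. The toy C++ program below mimics the map and reduce phases in a single process to show the shape of the programming model; it is a conceptual sketch, not the Hadoop API the course itself uses:

#include <stdio.h>
#include <map>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> documents = {
        "the quick brown fox", "the lazy dog", "the fox"};

    // Map phase: emit a (word, 1) pair for every word in every document.
    std::vector<std::pair<std::string, int>> pairs;
    for (const std::string &doc : documents) {
        std::istringstream in(doc);
        std::string word;
        while (in >> word)
            pairs.push_back({word, 1});
    }

    // Shuffle + reduce phase: group pairs by key and sum the counts.
    std::map<std::string, int> counts;
    for (const auto &p : pairs)
        counts[p.first] += p.second;

    for (const auto &kv : counts)
        printf("%s %d\n", kv.first.c_str(), kv.second);
}

Hadoop's contribution is running the same two phases across many machines, with the grouping step (here a std::map) handled by the framework's distributed shuffle.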
The Week in HPC Patents
OK, so we haven't really combed the patent office files for an exhaustive list of relevant HPC patents, but there were a couple of noteworthy ones that came across my electronic transom, to wit:
1) System and Method for Improving the Performance of High Performance Computing Applications on Cloud Using Integrated Load Balancing (United States Patent Application 20130031545)
The five-person team of inventors (four from India and one from the United States) note that "effective optimization of the load assignments on the Cloud needs to take into account the High Performance Computing (HPC) application task requirements as well as the computational capacity and communication bandwidth of the Cloud resources. This disclosure proposes an approach for two-way transfer of the essential information between Cloud and HPC applications that result in better load assignment without violating network privacy."
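The filing's text is abstract, but the underlying idea (let the application describe its tasks, let the cloud describe its resources, and assign load using both) can be illustrated with a toy greedy scheduler. Everything below is invented for the illustration and is not drawn from the patent:

#include <stdio.h>
#include <algorithm>
#include <vector>

struct Task { double demand; };           // reported by the HPC application
struct Node { double capacity, load; };   // reported by the cloud provider

int main() {
    std::vector<Task> tasks = {{4}, {2}, {7}, {1}, {5}};
    std::vector<Node> nodes = {{10, 0}, {8, 0}, {6, 0}};

    // Largest task first, each placed on the node with the most spare capacity.
    std::sort(tasks.begin(), tasks.end(),
              [](const Task &a, const Task &b) { return a.demand > b.demand; });
    for (const Task &t : tasks) {
        auto best = std::max_element(nodes.begin(), nodes.end(),
            [](const Node &a, const Node &b) {
                return (a.capacity - a.load) < (b.capacity - b.load); });
        best->load += t.demand;
    }
    for (const Node &n : nodes)
        printf("assigned %.0f of %.0f units\n", n.load, n.capacity);
}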
2) Automatically Routing Super-Compute Interconnects (United States Patent Application 20130031334)
The three-member Austin, Texas-based team write that the "application relates generally to an improved data processing apparatus and method and more specifically to mechanisms for automatically routing super-compute interconnects."
Most of the patent verbiage is rather opaque, but the extremely detailed drawings, such as the application's Figure 1, may offer assistance in parsing these weighty documents.