The Week in HPC Research

By Tiffany Trader

March 7, 2013

The top research stories of the week have been hand-selected from prominent journals and leading conference proceedings. Here's another diverse set of items, including novel methods of data race detection; a comparison of predictive laws; a review of the promise of FPGAs; GPU virtualization using PCI direct pass-through; and an analysis of the Amazon Web Services High I/O platform.

Scalable Data Race Detection

A team of researchers from Berkeley Lab and the University of California, Berkeley, is investigating cutting-edge programming languages for HPC: languages that promote hybrid parallelism and shared memory abstractions using a global address space. This programming style is especially prone to data races that are difficult to detect, and prior race-detection work in the field has demonstrated 10X-100X slowdowns for non-scientific programs.
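To make the hazard concrete, here is a minimal illustrative sketch in UPC (all identifiers are invented for this example; it is not taken from the paper). Thread t performs a remote write to its neighbor's slot in a shared array while that neighbor writes the same slot locally, so two threads touch one shared location with no intervening synchronization, which is precisely the kind of race such a tool must flag:

/* Minimal UPC sketch (illustrative only; identifiers invented).
 * Thread t remotely writes slot[t+1] while thread t+1 writes
 * slot[t+1] locally: two unsynchronized writes to the same shared
 * location, i.e., a data race. Compile with a UPC compiler
 * (e.g., Berkeley UPC). */
#include <upc.h>
#include <stdio.h>

shared int slot[THREADS];            /* one element per UPC thread */

int main(void) {
    slot[(MYTHREAD + 1) % THREADS] = MYTHREAD;  /* remote write, no sync */
    slot[MYTHREAD] = -1;                        /* local write to the slot
                                                   another thread targets */
    upc_barrier;                     /* too late: the race already occurred */

    if (MYTHREAD == 0)
        for (int i = 0; i < THREADS; i++)
            printf("slot[%d] = %d\n", i, slot[i]);  /* nondeterministic */
    return 0;
}

Depending on the interleaving, each slot may end up holding either value, and nothing in the program text signals the hazard, which is why automated detection matters.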

In a recent paper, the computer scientists present what they say is “the first complete implementation of data race detection at scale for UPC programs.” UPC stands for Unified Parallel C, an extension of the C programming language developed by the HPC community for large-scale parallel machines. The implementation used by the Berkeley-based team tracks local and global memory references in the program. It employs two methods for reducing overhead: 1) hierarchical function- and instruction-level sampling; and 2) exploiting the runtime persistence of aliasing and locality specific to Partitioned Global Address Space (PGAS) applications.

Experiments show that the best results are attained when both techniques are used in tandem. “When applying the optimizations in conjunction our tool finds all previously known data races in our benchmark programs with at most 50% overhead,” the researchers state. “Furthermore, while previous results illustrate the benefits of function level sampling, our experiences show that this technique does not work for scientific programs: instruction sampling or a hybrid approach is required.”

Their work is published in the Proceedings of the 18th ACM SIGPLAN symposium on Principles and Practice of Parallel Programming.


Predicting the Progress of Technology

A fascinating new study applies the scientific method to some of our most popular predictive models. A research team from MIT and the Santa Fe Institute compared several different approaches for predicting technological improvement – including Moore’s Law and Wright’s Law – to known cases of technological progress using past performance data from different industries.

Moore’s Law, put forward by Intel co-founder Gordon Moore in 1965, predicts that a chip’s transistor count will double at a fixed cadence (roughly every two years in its modern form; 18 months is the commonly quoted figure). In more general terms, it holds that technologies advance exponentially with time. Wright’s Law, first formulated by Theodore Wright in 1936 and also called the Rule of Experience, holds that costs decline by a constant percentage with every doubling of cumulative production. Other alternative models were proposed by Goddard, Sinclair et al., and Nordhaus.
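For concreteness, the two laws can be written in a common notation (a sketch in our own symbols, not necessarily the paper's): let $y_t$ be the unit cost of a technology at time $t$ and $x_t$ its cumulative production. Then

\[
\text{Moore:}\quad y_t = y_0\, e^{-\mu t},
\qquad
\text{Wright:}\quad y_t = y_0 \left( \frac{x_t}{x_0} \right)^{-w}.
\]

If production itself happens to grow exponentially, $x_t = x_0 e^{g t}$, then substituting into Wright's law gives $y_t = y_0 e^{-g w t}$: an exponential cost decline of exactly Moore's form, with rate $\mu = g w$. This is the algebra behind the regularity the authors report.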

The study, which employed hindcasting, used a statistical model to rank the performance of the postulated laws. The comparison data came from a database on the cost and production of 62 different technologies, an expansive knowledge base that enabled the researchers to test six different prediction principles against real-world data.

The results revealed that the law with the greatest accuracy was Wright’s Law, but Moore’s Law was a very close second. In fact, the laws themselves are more similar than previously realized.

“We discover a previously unobserved regularity that production tends to increase exponentially,” write the authors. “A combination of an exponential decrease in cost and an exponential increase in production would make Moore’s law and Wright’s law indistinguishable…. We show for the first time that these regularities are observed in data to such a degree that the performance of these two laws is nearly the same.”

“Our results show that technological progress is forecastable, with the square root of the logarithmic error growing linearly with the forecasting horizon at a typical rate of 2.5% per year,” they conclude.

The team includes Bela Nagy of the Santa Fe Institute, J. Doyne Farmer of the University of Oxford and the Santa Fe Institute, Quan Bui of St. John’s College in Santa Fe, NM, and Jessika E. Trancik of the Santa Fe Institute and MIT. Their findings are published in the online open-access journal PLOS ONE.


FPGA Programming for the Masses

FPGAs (field-programmable gate arrays) have been around for many years and show real potential for advancing HPC, but their popularity has been limited because they are difficult to work with. This is the assertion of a group of researchers from IBM's T.J. Watson Research Center, who argue that FPGAs won't become mainstream until their various programmability challenges are addressed.

In a paper published last month in ACM Queue, the research team observes that there exists a spectrum of architectures, with general-purpose processors at one end and ASICs (application-specific integrated circuits) at the other. Architectures like PLDs (programmable logic devices), they argue, offer best-of-both-worlds potential: they are closer to the hardware than general-purpose processors, yet they can be reprogrammed. The most prominent PLD is, in fact, the FPGA.

The authors write:

FPGAs were long considered low-volume, low-density ASIC replacements. Following Moore’s law, however, FPGAs are getting denser and faster. Modern-day FPGAs can have up to 2 million logic cells, 68 Mbits of BRAM, more than 3,000 DSP slices, and up to 96 transceivers for implementing multigigabit communication channels. The latest FPGA families from Xilinx and Altera are more like an SoC (system-on-chip), mixing dual-core ARM processors with programmable logic on the same fabric. Coupled with higher device density and performance, FPGAs are quickly replacing ASICs and ASSPs (application-specific standard products) for implementing fixed function logic. Analysts expect the programmable IC (integrated circuit) market to reach the $10 billion mark by 2016.

The researchers note that “despite the advantages offered by FPGAs and their rapid growth, use of FPGA technology is restricted to a narrow segment of hardware programmers. The larger community of software programmers has stayed away from this technology, largely because of the challenges experienced by beginners trying to learn and use FPGAs.”

The rest of this excellent paper addresses the various challenges in detail and draws attention to the lack of support for device drivers, programming languages, and tools. The authors drive home the point that the community will only be able to leverage the benefits of FPGAs if the programming aspects are improved.


GPU Virtualization Using PCI Direct Pass-Through

The technical computing space has seen several trends develop over the past decade, among them server virtualization, cloud computing and GPU computing. It's clear that GPGPU computing has a role to play in HPC systems, but can these trends be combined? A research team from Chonbuk National University in South Korea proposes exactly that in a paper published in the periodical Applied Mechanics and Materials. They investigate a method of GPU virtualization that exploits the GPU in a virtualized cloud computing environment.

The researchers claim their approach is different from previous work, which mostly reimplemented GPU programming APIs and virtual device drivers. Past research focused on sharing the GPU among virtual machines, which increased virtualization overhead. The paper describes an alternate method: the use of PCI direct pass-through.

“In our approach, bypassing virtual machine monitor layer with negligible overhead, the mechanism can achieve similar computation performance to bare-metal system and is transparent to the GPU programming APIs,” the authors write.
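As a concrete illustration (ours, not the paper's), one quick way to sanity-check such a pass-through setup is a plain C program against the CUDA runtime API; run inside the guest VM, it should enumerate the physical GPU exactly as it would on bare metal:

/* Minimal C sketch using the CUDA runtime API (illustrative; not from
 * the paper). With PCI direct pass-through, running this inside the
 * guest should report the same device the bare-metal host sees.
 * Build (hypothetical file name): nvcc query.c -o query */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int n = 0;
    cudaError_t err = cudaGetDeviceCount(&n);
    if (err != cudaSuccess) {
        fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));
        return 1;   /* no device visible: pass-through likely misconfigured */
    }
    for (int i = 0; i < n; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %d multiprocessors, %.1f GiB\n",
               i, prop.name, prop.multiProcessorCount,
               prop.totalGlobalMem / 1073741824.0);
    }
    return 0;
}

A fuller validation would then compare kernel benchmark times in the guest against the host, which is essentially the bare-metal comparison the authors describe.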


Analysis of I/O Performance on AWS High I/O Platform

The HPC community is still exploring the potential of the cloud paradigm to discern the most suitable use cases. The pay-per-use basis of compute and storage resources is an attractive draw for researchers, but so is the illusion of limitless resources to tackle large-scale scientific workloads.

In the most recent edition of the Journal of Grid Computing, computer scientists from the Department of Electronics and Systems at the University of A Coruña in Spain evaluate the I/O storage subsystem on the Amazon EC2 platform, specifically the High I/O instance type, to determine its suitability for I/O-intensive applications. The High I/O instance type, released in July 2012, is backed by SSD and also provides high levels of CPU, memory and network performance.

The study looked at the low-level cloud storage devices available in Amazon EC2 (ephemeral disks and Elastic Block Store volumes) on both local and distributed file systems. It also assessed several I/O interfaces commonly employed by scientific workloads, notably POSIX, MPI-IO and HDF5. Finally, the scalability of a representative parallel I/O code was analyzed in terms of both performance and cost.
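As a hedged sketch of the MPI-IO interface style such evaluations exercise (our example, not the authors' benchmark code), the fragment below has every MPI rank write its own contiguous block of a shared file with a single collective call:

/* Minimal MPI-IO sketch in C (illustrative; not the paper's code).
 * Each rank writes N doubles at its own offset in one shared file
 * using a collective write, a common pattern in parallel I/O tests.
 * Run (hypothetical): mpicc io.c -o io && mpirun -np 4 ./io */
#include <mpi.h>
#include <stdlib.h>

#define N 1048576                 /* doubles per rank (hypothetical size) */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *buf = malloc(N * sizeof(double));
    for (int i = 0; i < N; i++) buf[i] = rank;    /* recognizable payload */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "testfile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Rank r writes bytes [r*N*8, (r+1)*N*8) of the shared file. */
    MPI_Offset off = (MPI_Offset)rank * N * sizeof(double);
    MPI_File_write_at_all(fh, off, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}

Timing such writes at increasing rank counts, over the different storage backends, is the essence of the scalability analysis described above.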

As the results show, cloud storage devices have different performance characteristics and usage constraints. “Our comprehensive evaluation can help scientists to increase significantly (up to several times) the performance of I/O-intensive applications in Amazon EC2 cloud,” the researchers state. “An example of optimal configuration that can maximize I/O performance in this cloud is the use of a RAID 0 of 2 ephemeral disks, TCP with 9,000 bytes MTU, NFS async and MPI-IO on the High I/O instance type, which provides ephemeral disks backed by Solid State Drive (SSD) technology.”
