GPU Computing Ushers in Progress
In the future, 2010 may be remembered as the year of the GPU, or at least its big debut. China stole TOP500 glory using the massively parallel processing power of the graphics chip. And while the US can claim no GPU-based supercomputers among the top 10, GPGPU computing is having a big influence on US science and research.
In a piece over at Scientific Computing, Rob Farber examines the growing popularity of GPU computing. As a senior research scientist at Pacific Northwest National Laboratory, Farber has a good vantage point from which to see how the evolution of computing technology affects science on the ground. Farber argues that multi-threaded and GPGPU technology are changing the dynamics of scientific computing, opening fresh opportunities in academia, product development and HPC research. In particular, GPGPU computing has made it possible to do more science with fewer or cheaper resources.
Graphics processors have matured into general-purpose computational devices at exactly the right time to be considered in this industry-wide retooling to utilize multi-threaded parallelism. To put this in very concrete terms, any teenager (or research effort) from Beijing, China, to New Delhi, India, can purchase a teraflop-capable graphics processor and start developing and testing massively parallel applications.
While it’s no secret that multicore hardware requires applications that can harness its power, the fact is that hardware is way out in front, with software struggling to catch up. Lest that disconnect continue to be a major blight on scientific progress, Farber doles out this cautionary advice:
Legacy applications and research efforts that do not invest in multi-threaded software will not benefit from modern multi-core processors, because single-threaded and poorly scaling software will not be able to utilize extra processor cores. As a result, computational performance will plateau at or near current levels, placing the projects that depend on these legacy applications at risk of both stagnation and loss of competitiveness.
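Farber's warning is, in essence, Amdahl's law: the speedup a program can get from extra cores is capped by the fraction of its work that runs in parallel. A quick back-of-the-envelope sketch (not from the article; the parallel fractions and core counts below are illustrative) shows where the plateau comes from:

```python
# Amdahl's law: overall speedup on n cores when a fraction p of a
# program's work can run in parallel (illustrative values only).
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup = 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.0, 0.5, 0.95):          # parallel fraction of the workload
    for n in (4, 64, 1024):         # number of cores
        print(f"p={p:.2f}  cores={n:4d}  speedup={amdahl_speedup(p, n):6.2f}")
```

A single-threaded code (p = 0) sees a speedup of exactly 1 no matter how many cores are added, which is the stagnation Farber describes, while a well-threaded code with p = 0.95 approaches a 20x speedup as the core count grows.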
Still, Farber predicts that HPC will experience tremendous progress as the next generation of software developers masters the challenges associated with massively parallel programming. Multicore-aware software is the key that will unlock the full potential of multicore hardware, and that hardware is already here. Farber notes that major HPC vendors have developed, or are in the process of developing, hybrid systems that can take advantage of the parallel nature of GPUs. Many, if not most, supercomputing centers are themselves evaluating hybrid CPU-GPU architectures, among them Tokyo Tech, Oak Ridge National Laboratory (ORNL), the National Energy Research Scientific Computing Center (NERSC) and PNNL.
Full story at Scientific Computing