September 13, 2012
NVIDIA is once again highlighting some of the Kepler hardware goodies that will come with the K20 chip, which is due out later this year. A blog post penned by NVIDIA engineer Stephen Jones discusses the new 'Dynamic Parallelism' feature of the K20, a processor aimed at the supercomputing market.
Dynamic Parallelism enables an application task (called a kernel, in GPU parlance) to spawn other kernels inside the GPU. The feature is supported by the presence of a "grid management" unit on the chip that allows kernels to be launched, suspended, and resumed natively. Prior GPUs forced the application to return control to the CPU host to start up another kernel. Essentially, this allows more of the application to execute continuously within the GPU without having to bounce back to the CPU.
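The pattern can be sketched in a few lines of CUDA. This is a minimal illustration, not code from NVIDIA's post; the kernel names are hypothetical, and it assumes K20-class hardware (compute capability 3.5 or later) compiled with relocatable device code, e.g. `nvcc -arch=sm_35 -rdc=true`.

```cuda
#include <cstdio>

// Hypothetical child kernel, launched from the GPU rather than the host.
__global__ void child_kernel(int depth)
{
    printf("child running at depth %d\n", depth);
}

// Hypothetical parent kernel: with Dynamic Parallelism it can launch
// child_kernel directly, with no round trip to the CPU.
__global__ void parent_kernel()
{
    child_kernel<<<1, 1>>>(1);
    // Wait for the child grid to finish before the parent continues.
    cudaDeviceSynchronize();
}

int main()
{
    parent_kernel<<<1, 1>>>();  // the host launches only the top-level kernel
    cudaDeviceSynchronize();
    return 0;
}
```

On pre-K20 hardware, the `child_kernel` launch inside `parent_kernel` would not compile; the host would have to issue every launch itself.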
Jones illustrates how this works with the classic Quicksort algorithm. Basically, Quicksort slices a list of items into sub-lists and sorts them individually. As Jones writes, "It's a classic 'divide-and-conquer' algorithm because it breaks the problem into ever smaller pieces and solves them recursively."
Prior to the K20 Kepler, the GPU would have had to return control to the CPU for each sub-list sort, and the application code had to manage that back-and-forth rather inelegantly. With dynamic parallelism, the CUDA code is short and sweet, resembling a standard CPU-style implementation.
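The recursive structure Jones describes might look roughly like the sketch below. This is an illustration of the pattern, not his actual code: the `partition` device helper is assumed, and launching each half on its own non-blocking stream is what lets the two sub-sorts run concurrently on different parts of the GPU.

```cuda
// Hedged sketch of a dynamic-parallelism Quicksort (illustrative names).
// Each recursion level launches its two sub-sorts as child grids from
// device code, so the CPU is never involved after the first launch.
__global__ void quicksort(int *data, int left, int right)
{
    if (left >= right)
        return;

    int pivot = partition(data, left, right);  // assumed device helper

    cudaStream_t s_left, s_right;

    // Sort the left half as a child grid on its own stream.
    cudaStreamCreateWithFlags(&s_left, cudaStreamNonBlocking);
    quicksort<<<1, 1, 0, s_left>>>(data, left, pivot - 1);

    // Sort the right half concurrently on a second stream.
    cudaStreamCreateWithFlags(&s_right, cudaStreamNonBlocking);
    quicksort<<<1, 1, 0, s_right>>>(data, pivot + 1, right);
}
```

The host launches `quicksort` once on the full array; every subsequent launch happens on the GPU, which is exactly the back-and-forth elimination the article describes.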
Not only should this make programming the application more natural, it should also increase performance, since it eliminates much of the communication and context switching between the CPU and GPU. And because the grid management unit enables tasks to run concurrently on different parts of the GPU, even more performance can be extracted. Plus, it leaves more of the CPU free to do its own work, which can speed overall execution even further.
Any application built on these divide-and-conquer algorithms can take advantage of the feature. That includes things like branch-and-bound algorithms, graph analytics, and adaptive meshing, to name a few, which in turn encompasses a wide range of common HPC apps: CFD, mechanical stress analysis, weather codes, computational ray tracing -- actually the list is rather large.
The only real disadvantage is for existing CUDA codes developed on pre-K20 GPUs, which would have to be tweaked to take advantage of the new feature. For codes being newly developed or ported, the feature looks to be a no-brainer.
Full story at NVIDIA website