Tag: heterogeneous computing
The Weekly Top Five features the five biggest HPC stories of the week, condensed for your reading pleasure. This week, we cover the Cray/Sandia partnership to found a knowledge institute; RenderStream’s FireStream-based workstations and servers; NVIDIA’s latest CUDA centers; Reservoir Labs and Intel’s extreme scale ambitions; and Jülich Supercomputing Centre’s new hybrid cluster.
Adopting the ARM architecture would be a leap of faith for the x86 chip vendor, but perhaps a necessary one.
The days of PCI-attached discrete GPUs are numbered.
In an HPC market that seems determined to go down the CPU-GPU path, upstart Convey Computer may yet offer a few surprises. The company today unveiled the sequel to the HC-1 platform it introduced in 2008. Called the HC-1ex, the new system adds considerably more performance and capability, but retains the original x86-FPGA co-processor design.
Nvidia Fellow David Kirk takes a swipe at Intel’s heterogeneous computing plans.
Russian HPC cluster vendor T-Platforms says it will add NVIDIA’s Tesla 20-series (Fermi-class) GPUs to its latest blade offering. According to the company, the GPGPU blade will feature a “very high computing density design along with aggressive power-saving schemes for heterogeneous environments.”
The future of supercomputer design seems to be heading toward using multiple types of processors in a single system.
The second wave of GPGPU software development tools is upon us. New tools from The Portland Group Inc. (PGI) and France-based CAPS Enterprise enable everyday C and Fortran programmers to tap into GPU acceleration within an integrated heterogeneous computing environment.
If you are familiar with current approaches to programming accelerators, you are either discomforted by their complexity or excited by the level of control they offer. Can we come up with a different model of GPU and accelerator programming — a model that allows HPC programmers to focus on domain science instead of on computer science?