NVIDIA's Hyper-Q feature is designed to make MPI applications run faster than ever before.
As NVIDIA's upcoming Kepler-grade Tesla GPU prepares to do battle with Intel's Knights Corner, the companies are busy formulating their respective HPC accelerator stories. While NVIDIA has enjoyed the advantage of actually having products in the field to talk about, Intel has managed to capture the attention of some fence-sitters with assurances of high programmability, simple recompiles, and transparent scalability for its Many Integrated Core (MIC) coprocessors. But according to NVIDIA's Steve Scott, such promises ignore certain hard truths about how accelerator-based computing really works.
This week Intel unveiled an upmarket version of its Cluster Studio offering aimed at performance-minded MPI application developers. Called Cluster Studio XE, the jazzed-up developer suite adds Intel analysis tools to make it easier for programmers to optimize and tune codes for maximum performance. It also includes the latest compilers, runtimes, and MPI library to keep pace with new developments in parallel programming.
NERSC director Kathy Yelick shares insights on programming petascale systems like Hopper.
It was a bit of a surprise when QLogic beat out Mellanox as the interconnect vendor on the NNSA’s Tri-Lab Linux Capacity Cluster 2 contract. Not only was Mellanox the incumbent on the original Tri-Lab contract, but it is widely considered to have the more complete solution set for InfiniBand. Nevertheless, QLogic managed to win the day, and did so with somewhat unconventional technologies.
A new generation of HPC programmers is embracing higher-level languages.
QLogic intros new pass-through module; Voltaire debuts MPI offload technology.
NVIDIA’s success with CUDA is no accident.
Can a solution for HPC software live within MPI, OpenMP, CUDA, OpenCL, and/or Ct?
Hybridizing MPI applications with CPU cores and GP-GPUs.