With exascale presenting a far larger challenge than previous milestones in computing, an integrated, collaborative approach is all the more necessary. While concerted funding for extreme-scale computing came later than many had hoped, several international efforts are now afoot, including the European project DEEP, which stands for Dynamical ExaScale Entry…
Intel’s Many Integrated Core (MIC) architecture was designed to accommodate highly parallel applications, a great many of which rely on the Message Passing Interface (MPI) standard. Applications deployed on Intel Xeon Phi coprocessors may use offload programming, an approach similar to the CUDA framework for general-purpose GPU (GPGPU) computing, in which the CPU-based application is…
Since the first details about the MIC architecture emerged, Intel has continually harked back to its vision of offering a high degree of parallelism in a power-efficient package that could also promise programmability. With the next-generation Xeon Phi expected to hit the market in the years to come with its (still unstated) high…
The global distributed computing system known as the Worldwide LHC Computing Grid (WLCG) brings together resources from more than 150 computing centers in nearly 40 countries. Its mission is to store, distribute and analyze the 25 petabytes of data generated each year by the Large Hadron Collider (LHC), based out of the European Laboratory for Particle Physics (CERN) in Geneva, Switzerland.
It’s been nearly a year since the Intel Xeon Phi coprocessor debuted at SC12, and in that time it has seen strong acceptance from the community. But as this is a relatively new technology, research into its usefulness is still emerging. Adding to the growing body of research on the Phi is “Understanding the Costs…
This week we spoke with Jörg Lotze, CTO and cofounder of the financial-services software firm Xcelerit, about benchmarking accelerators, coprocessors, and multicore architectures, with specific emphasis on how GPUs stack up against Intel Xeon Phi coprocessors. Lotze discussed the challenges and opportunities of each in the context of real-world Monte Carlo examples.
Today IBM announced NextScale, which will eventually take the place of its iDataPlex systems. Tapping the power of the new Ivy Bridge processors, coupled with eventual support for a host of accelerator options (GPUs, Xeon Phi, and likely other processor choices), the company also put its stake in the ground for hyperscale and HPC…
Iowa State has taken delivery of its most powerful supercomputer yet. The 4,768-core “Cyence” HPC cluster – accelerated by NVIDIA GPUs and Intel Xeon Phis – is the centerpiece of a $2.6 million project to revitalize HPC-based research at Iowa State…
Following Intel’s low-key announcement today of updates to Cluster Studio and Parallel Studio, we spoke with its software and tools guru, James Reinders, about what these enhancements mean for Xeon Phi, Fortran development, and progress on standards, including OpenMP 4.0 and…
This week at the Intel “Reimagine the Datacenter” event in San Francisco, we talked with the company’s HPC lead, Raj Hazra, about the general themes that emerged during a series of presentations around efficiency, performance, and a new approach to integration across the stack. While it was not an HPC-oriented set of announcements, Hazra said…