Tag: GPUs

OpenACC Reviews Latest Developments and Future Plans

Nov 11, 2015 |

This week, during the lead-up to SC15, the OpenACC standards group announced several new developments, including the ratification and release of version 2.5 of the OpenACC API specification, member support for multiple new OpenACC targets, and other progress with the standard. “The 2.5 specification addresses an essential challenge of profiling code where a Read more…

PGI Accelerator Compilers Add OpenACC Support for x86 Multicore CPUs

Oct 29, 2015 |

NVIDIA today announced availability of its newest PGI Accelerator Fortran, C and C++ compilers (version 15.10), now with support for the OpenACC directive-based parallel programming standard on x86-architecture multicore microprocessors. The new compilers allow OpenACC-enabled source code to be compiled for parallel execution on a multicore CPU or a GPU accelerator. “Our goal is to Read more…

The Case for Mixed-Precision Arithmetic

Oct 22, 2015 |

Everyone knows that 64-bit floating point arithmetic dominates in HPC. When a new Xeon or high-end GPU comes out, the most interesting spec to an HPCer is probably its peak double-precision flops performance, and yet… Along with the democratization of HPC and the rise of accelerators have come new use cases for sub-FP64 and mixed-precision arithmetic. One Read more…

Microsoft Puts GPU Boosters on Azure Cloud

Sep 29, 2015 |

Today at its AzureCon event, Microsoft expanded the capabilities of its public cloud, Azure, with the addition of N-series GPU-enabled virtual machines available over a fast RDMA network. The company also announced that it is reducing prices on its high-end instances, A8-A11. Jason Zander, corporate vice president at Microsoft Azure, began by presenting a view of Azure Read more…

AMD’s Exascale Strategy Hinges on Heterogeneity

Jul 29, 2015 |

In a recent IEEE Micro article, a team of engineers and computer scientists from chipmaker Advanced Micro Devices (AMD) details AMD’s vision for exascale computing, which in its most essential form combines CPU-GPU integration with hardware and software support to facilitate running scientific workloads on exascale-class systems.

IDC: The Changing Face of HPC

Jul 16, 2015 |

At IDC’s annual ISC breakfast there was a good deal more than market-update numbers, although there were plenty of those: “We try to track every server sold, every quarter, worldwide,” said Earl Joseph, IDC program vice president and executive director of the HPC User Forum. Perhaps more revealing, and just as important this year, was IDC’s unveiling Read more…

NVIDIA Wades Farther into Deep Learning Waters

Jul 7, 2015 |

Continuing the machine learning push that set the tone for this year’s GPU Technology Conference, NVIDIA is refreshing its GPU-accelerated deep learning software in tandem with the 2015 International Conference on Machine Learning (ICML), one of the major international conferences focused on the burgeoning domain. The announcement involves updates to CUDA, cuDNN, and DIGITS. Altogether, the new features provide significant Read more…

Shining a Light on SKA’s Massive Data Processing Requirements

Jun 4, 2015 |

One of the many highlights of the fourth annual Asia Student Supercomputer Challenge (ASC15) was the MIC optimization test, which this year required students to optimize a gridding algorithm used in the world’s largest international astronomy effort, the Square Kilometre Array (SKA) project. Gridding is one of the most time-consuming steps in radio telescope data processing. Read more…

Tech Giants Battle for Image Recognition Supremacy

May 13, 2015 |

The race to exascale isn’t the only rivalry stirring up the advanced computing space. Artificial intelligence sub-fields, like deep learning, are also inspiring heated competition from tech conglomerates around the globe. When it comes to image recognition, computers have already passed the threshold of average human competency, leaving tech titans, like Baidu, Google and Microsoft, vying to Read more…

Machine Learning Guru Sees Future in Multi-GPU Clusters

Apr 30, 2015 |

Machine learning has made enormous strides in the past few years, owing in large part to the powerful and efficient parallel processing provided by general-purpose GPUs. The latest example of this trend is a partnership between New York University’s Center for Data Science and NVIDIA. The mission, says the pair, is to develop next-gen deep learning Read more…