Researchers at the Georgia Institute of Technology and the University of Southern California will receive nearly $2 million in federal funding to create tools that help developers exploit hardware accelerators in a cost-effective, power-efficient manner. The purpose of this three-year NSF grant is to bring formerly niche supercomputing capabilities into the hands of Read more…
The global distributed computing system known as the Worldwide LHC Computing Grid (WLCG) brings together resources from more than 150 computing centers in nearly 40 countries. Its mission is to store, distribute and analyze the 25 petabytes of data generated each year by the Large Hadron Collider (LHC) at the European Laboratory for Particle Physics (CERN) in Geneva, Switzerland. Read more…
Not content to let the Tianhe-2 announcement ride alone, Intel rolled out a series of announcements around its Knights Corner and Xeon Phi products, all of which are aimed at adding options and variety for a wider base of potential users across the HPC spectrum. Today at the International Supercomputing Conference, the company’s Raj…
With help from a draft report by Jack Dongarra of the University of Tennessee and Oak Ridge National Laboratory, who also spearheads the process of verifying the top-ranked supercomputers, we are able to share full details on the system’s processors, Xeon Phi coprocessors, custom interconnect, storage and memory, as well as its power and cooling. The supercomputer out of China will be…
This week we’re at the IDC User Forum in Tucson, staying cool amidst some heated talks about which processor, coprocessor and accelerator approaches are going to push into the lead in the next few years. To take this pulse, we sat down with IDC’s Steve Conway to talk about some general trends that are a tall drink of water for a few key vendors, including Intel, NVIDIA…
The top research stories of the week include an evaluation of sparse matrix multiplication performance on Xeon Phi versus four other architectures; a survey of HPC energy efficiency; performance modeling of OpenMP, MPI and hybrid scientific applications using weak scaling; an exploration of anywhere, anytime cluster monitoring; and a framework for data-intensive cloud storage.
Penguin Computing continues to see growing demand for servers that go heavy on the GPUs (or other coprocessors). Based on feedback from one such customer, it has designed the Relion 2808GT server, which it says now has the highest compute density of any server on the market.
AMD is launching its most powerful graphics card yet: the dual-GPU FirePro S10000 promises 5.91 teraflops of peak single precision and 1.48 teraflops of peak double precision floating point performance. And with AMD’s “Graphics Core Next” (GCN) architecture under the hood, the S10000 can deliver compute and graphics processing simultaneously.
The HPC community has been dabbling with Field Programmable Gate Arrays (FPGAs) for several years now, but the technology has never reached escape velocity. At SC08 this week, however, startup Convey Computer Corp. launched a new server and software stack that aims to tame FPGAs and deliver reconfigurable computing to everyday HPC users.