Tag: NVIDIA

Bioinformatics Challenge: Fat Nodes, Skinny Nodes, or Hybrids

Oct 12, 2015

Life sciences organizations typically have a variety of computational requirements, and picking the best cluster server architecture has always been a challenge. Do you go with fat nodes, typified by their large number of cores per node, or thin (sometimes called skinny) nodes with fewer, faster-performing cores per node? Certainly, the choice will depend Read more…

Microsoft Puts GPU Boosters on Azure Cloud

Sep 29, 2015

Today at its AzureCon event, Microsoft expanded the capabilities of its public cloud, Azure, with the addition of N-series GPU-enabled virtual machines available over a fast RDMA network. The company also announced that it is reducing prices on its high-end A8-A11 instances. Jason Zander, corporate vice president at Microsoft Azure, began by presenting a view of Azure Read more…

IBM/OpenPOWER Seek Traction in Cloud Infrastructure

Sep 23, 2015

More is always better when building an ecosystem, and the recent selection of an IBM POWER8 system (S822L) by French hosting and services provider Online to put into its bare metal cloud environment is more evidence of IBM (NYSE: IBM) and OpenPOWER ambitions to gain traction in the cloud provider space. Online, a member of the Read more…

Today’s Outlook: GPU-accelerated Weather Forecasting

Sep 15, 2015

Weather forecasting has always been challenging. When Hurricane Sandy raced up the Atlantic coast, U.S.-run models famously suggested it would track out to sea, while a European model predicted a sharp left turn and landfall, which sadly happened. Forecasting is a high-stakes activity. Today, the Swiss Federal Office of Meteorology and Climatology (MeteoSwiss) announced use Read more…

Mellanox, IBM, ORNL Spearhead UCX Framework Initiative

Jul 13, 2015

Interconnect technology specialist Mellanox announced at ISC today a collaboration to develop a new open-source network communication framework – Unified Communication X (UCX) – for high-performance and data-centric applications. The effort is intended to provide platform abstractions supporting various communication technologies for “high performance compute, and data platforms” and will help pave the Read more…

NVIDIA Wades Farther into Deep Learning Waters

Jul 7, 2015

Continuing the machine learning push that set the tone for this year’s GPU Technology Conference, NVIDIA is refreshing its GPU-accelerated deep learning software in tandem with the 2015 International Conference on Machine Learning (ICML), one of the major international conferences focused on the burgeoning domain. The announcement involves updates to CUDA, cuDNN, and DIGITS. Altogether, the new features provide significant Read more…

Is IBM Getting Openness Right? Yes, Says GM Doug Balog

Jun 29, 2015

The annual Red Hat Summit, held in Boston last week, is something of a revival tent for open source, where the pulpits are plentiful and so are the smiling believers. Indeed, it’s hard to dispute the powerful innovation springing from open source. The RH Summit, which started 11 years ago as a modest celebration of open source Read more…

Sumit Gupta Leaves NVIDIA to Join IBM

May 19, 2015

Long-time industry executive Sumit Gupta has left GPU pioneer NVIDIA, where he was GM of GPU Accelerated Data Center Computing, to join IBM as VP, HPC & OpenPOWER Operations. Executive changes in the HPC world are common, but changes at this level are rare. Few details are known, but IBM provided the background answers below to Read more…

Tech Giants Battle for Image Recognition Supremacy

May 13, 2015

The race to exascale isn’t the only rivalry stirring up the advanced computing space. Artificial intelligence sub-fields like deep learning are also inspiring heated competition among tech conglomerates around the globe. When it comes to image recognition, computers have already passed the threshold of average human competency, leaving tech titans like Baidu, Google, and Microsoft vying to Read more…

Machine Learning Guru Sees Future in Multi-GPU Clusters

Apr 30, 2015

Machine learning has made enormous strides in the past few years, owing in large part to the powerful and efficient parallel processing provided by general-purpose GPUs. The latest example of this trend is a partnership between New York University’s Center for Data Science and NVIDIA. The mission, says the pair, is to develop next-gen deep learning Read more…