Today's Top Feature

HPC Technique Propels Deep Learning at Scale

Researchers from Baidu’s Silicon Valley AI Lab (SVAIL) have adapted a well-known HPC communication technique

By Tiffany Trader
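The teaser does not name the technique, but the Baidu SVAIL work widely reported at the time was an adaptation of HPC's bandwidth-optimal ring allreduce to gradient synchronization in deep learning. Below is a minimal, single-process sketch of that algorithm under stated assumptions: the workers are simulated in-process with plain Python lists, and `ring_allreduce` is an illustrative name, not Baidu's actual code or API.

```python
# Hedged sketch of ring allreduce: each of n simulated workers holds a
# gradient vector; after the two phases, every worker holds the element-wise
# sum. Bandwidth per worker is independent of n, which is why the technique
# scales well for distributed deep learning.

def ring_allreduce(grads):
    """grads: list of n equal-length lists (length divisible by n).
    Returns per-worker result buffers, all equal to the element-wise sum."""
    n = len(grads)
    size = len(grads[0])
    assert size % n == 0, "vector length must be divisible by worker count"
    chunk = size // n
    bufs = [list(g) for g in grads]  # work on copies

    # Phase 1: scatter-reduce. In each of n-1 steps, worker i sends chunk
    # (i - step) mod n to its ring neighbor i+1, which accumulates it.
    # Afterwards, worker i owns the fully reduced chunk (i + 1) mod n.
    for step in range(n - 1):
        # Snapshot outgoing chunks first so same-step updates don't leak.
        sends = [(i, (i - step) % n,
                  bufs[i][((i - step) % n) * chunk:((i - step) % n + 1) * chunk])
                 for i in range(n)]
        for i, c, data in sends:
            dst = (i + 1) % n
            for k, v in enumerate(data):
                bufs[dst][c * chunk + k] += v

    # Phase 2: allgather. Circulate each finished chunk around the ring for
    # n-1 more steps, overwriting instead of accumulating.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n,
                  bufs[i][((i + 1 - step) % n) * chunk:((i + 1 - step) % n + 1) * chunk])
                 for i in range(n)]
        for i, c, data in sends:
            dst = (i + 1) % n
            bufs[dst][c * chunk:(c + 1) * chunk] = data

    return bufs
```

In a real deployment the per-step exchanges would be point-to-point messages (e.g. over MPI or a GPU interconnect) rather than list copies; the snapshot-then-apply loop here just stands in for the simultaneous send/receive of the ring.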

Center Stage

IDC: Will the Real Exascale Race Please Stand Up?

So the exascale race is on. And lots of organizations are in the pack. Government

By Bob Sorensen

TSUBAME3.0 Points to Future HPE Pascal-NVLink-OPA Server

Since our initial coverage of the TSUBAME3.0 supercomputer yesterday, more details have come to light on this innovative project. Of particular interest is a new board design for NVLink-equipped Pascal P100 GPUs that will create another entrant in the space currently occupied by Nvidia's DGX-1 system, IBM's "Minsky" platform, and the Supermicro SuperServer (1028GQ-TXR).

By Tiffany Trader

Tokyo Tech’s TSUBAME3.0 Will Be First HPE-SGI Super

In a press event Friday afternoon local time in Japan, Tokyo Institute of Technology (Tokyo

By Tiffany Trader

People to Watch 2017

With 2017 underway, we’re looking to the future of high performance computing and the milestones that are growing ever closer.

New File System from PSC Tackles Image Processing on the Fly

July 25, 2016

Processing the high-volume datasets, particularly image data, generated by modern scientific instruments is a huge challenge. Read more…

By John Russell

Inside the Fire: TACC Image of Rapidly Spinning Star

December 11, 2015

A computer-generated image of visualized variables from a star simulation dataset generated with the Anelastic Spherical Harmonic code on the Ranger supercomputer at the Texas Advanced Computing Center at The University of Texas at Austin. Read more…

Contrary View: CPUs Sometimes Best for Big Data Visualization

December 1, 2015

Contrary to conventional thinking, GPUs are often not the best vehicles for big data visualization. Read more…

By Jim Jeffers, Intel

Big Data Reveals Glorious Animation of Antarctic Bottom Water

November 30, 2015

A remarkably detailed animation of the movement of the densest and coldest water in the world around Antarctica has been produced using data generated on Australia’s most powerful supercomputer, Raijin. Read more…

NSF-Funded CADENS Project Seeking Data and Visualizations

November 24, 2015

The NSF-funded Centrality of Advanced Digitally ENabled Science (CADENS) project is looking for scientific data to visualize or existing data visualizations to weave into larger documentary narratives in a series of fulldome digital films and TV programs aimed at broad public audiences. Read more…

Mira is First Supercomputer to Simulate Large Hadron Collider Experiments

November 4, 2015

Argonne physicists are using Mira to perform simulations of Large Hadron Collider (LHC) experiments with a leadership-class supercomputer for the first time, shedding light on a path forward for interpreting future LHC data. Read more…

By Jim Collins

ESnet Releases Software for Building Interactive Network Portals

October 5, 2015

When ESnet, the Department of Energy’s (DOE) Energy Sciences Network, unveiled its online interactive network portal called MyESnet in July of 2011, the reaction was strongly positive – other research and education networks liked it so much, they wanted the code to create their own portals. Read more…

By Jon Bashor, LBNL Computing Sciences Communications Manager

Exploring Large Data for Scientific Discovery

August 27, 2015

A curse of dealing with mounds of data so massive that they require special tools, said computer scientist Valerio Pascucci, is that if you look for something, you will probably find it, thus injecting bias into the analysis. Read more…

By Scott Gibson, Communications Specialist, University of Tennessee


Whitepaper:

Sorting Fact from Fiction: HPC-enabled Engineering Simulations, On-premises or in the Cloud

HPC may once have been the sole province of huge corporations and national labs, but with hardware and cloud resources becoming more affordable, even small and mid-sized companies are taking advantage.

Download this Report

Sponsored by ANSYS

Webinar:

Enabling Open Source High Performance Workloads with Red Hat

High performance workloads, big data, and analytics are increasingly important in finding real value in today's applications and data. Before we deploy applications and mine data for mission and business insights, we need a high-performance, rapidly scalable, resilient infrastructure foundation that can accurately, securely, and quickly access data from all relevant sources. Red Hat offers technology that supports high performance workloads on a scale-out foundation, integrates multiple data sources, and can transition workloads across on-premises and cloud boundaries.

Register to attend this LIVE webinar

Sponsored by Red Hat

Virtual Booth Tours

Not able to attend SC16?
See what you missed

