Today's Top Feature

For IBM/OpenPOWER: Success in 2017 = (Volume) Sales

To a large degree, IBM and the OpenPOWER Foundation have done what they said they would.

By John Russell

Center Stage

UberCloud Cites Progress in HPC Cloud Computing

200 HPC cloud experiments, 80 case studies, and a ton of hands-on experience gained: that’s the progress UberCloud cites in HPC cloud computing.

By Wolfgang Gentzsch and Burak Yenier


A Conversation with Women in HPC Director Toni Collis

In this SC16 video interview, HPCwire Managing Editor Tiffany Trader sits down with Toni Collis, director of Women in HPC, to discuss the organization’s work to improve the representation of women in the HPC community.

By Tiffany Trader

BioTeam’s Berman Charts 2017 HPC Trends in Life Sciences

Twenty years ago, high performance computing was nearly absent from life sciences. Today it’s used throughout life sciences and biomedical research. Genomics and the data deluge from modern lab instruments are the main drivers, but so is the longer-term desire to perform predictive simulation in support of Precision Medicine (PM). There’s even a specialized life sciences supercomputer, ‘Anton’ from D.E. Shaw Research, and the Pittsburgh Supercomputing Center is standing up its second Anton machine, an Anton 2, and actively soliciting project proposals. There’s a lot going on.

By John Russell

HPCwire 2016 Readers’ and Editors’ Choice Awards

Who are the big winners for 2016? Come get a look at who is making a difference and showing why #HPCmatters.


Fast Rewind: 2016 Was a Wild Ride for HPC

December 23, 2016

Some years quietly sneak by – 2016 not so much. It’s safe to say there are always forces reshaping the HPC landscape, but this year’s bunch seemed like a noisy lot. Among the noisemakers: TaihuLight, DGX-1/Pascal, Dell EMC & HPE-SGI et al., KNL to market, OPA-IB chest thumping, Fujitsu-ARM, new U.S. President-elect, BREXIT, JR’s Intel Exit, Exascale (whatever that means now), NCSA@30, whither NSCI, Deep Learning mania, HPC identity crisis… You get the picture. Read more…

By John Russell

US Moves Exascale Goalpost, Targets 2021 Delivery

December 12, 2016

During SC16, Exascale Computing Project Director Paul Messina hinted at an accelerated timeline for reaching exascale in the US, and now we have official confirmation from Dr. Messina that the US is contracting its exascale timeline by one year. Read more…

By Tiffany Trader

US Exascale Computing Update with Paul Messina

December 8, 2016

Around the world, efforts are ramping up to cross the next major computing threshold with machines that are 50-100x more performant than today’s fastest number crunchers. Read more…

By Tiffany Trader

Enlisting Deep Learning in the War on Cancer

December 7, 2016

Sometime in Q2 2017 the first ‘results’ of the Joint Design of Advanced Computing Solutions for Cancer (JDACS4C) will become publicly available, according to Rick Stevens. He leads one of three JDACS4C pilot projects pressing deep learning (DL) into service in the War on Cancer. Read more…

By John Russell

Japan Plans Super-Efficient AI Supercomputer

November 28, 2016

Japan intends to deploy a 130-petaflops (half-precision) supercomputer by early 2018 as part of a 19.5 billion yen ($173 million) project called ABCI (for AI Bridging Cloud Infrastructure). Read more…

By Tiffany Trader

HPE-SGI to Tackle Exascale and Enterprise Targets

November 22, 2016

At first blush, and maybe second blush too, Hewlett Packard Enterprise’s (HPE) purchase of SGI seems like an unambiguous win-win. SGI’s advanced shared memory technology, its popular UV product line (notably for SAP HANA workloads), deep vertical market expertise, and services-led go-to-market capability all give HPE a leg up in its drive to remake itself. Bear in mind HPE came into existence just a year ago with the split of Hewlett-Packard. The computer landscape, including HPC, is shifting with still unclear consequences. One wonders who’s next on the deal block following Dell’s recent merger with EMC. Read more…

By John Russell

D-Wave SC16 Update: What’s Bo Ewald Saying These Days

November 18, 2016

Tucked in a back section of the SC16 exhibit hall, quantum computing pioneer D-Wave has been talking up its new 2000-qubit processor announced in September. Forget for a moment the criticism sometimes aimed at D-Wave. This small Canadian company has sold several machines including, for example, ones to Lockheed and NASA, and has worked with Google on mapping machine learning problems to quantum computing. In July Los Alamos National Laboratory took possession of a 1000-qubit D-Wave 2X system that LANL ordered a year ago around the time of SC15. Read more…

By John Russell

SC16 Precision Medicine Panel Proves HPC Matters

November 16, 2016

In virtually every way, precision medicine (PM) is a poster child for the HPC Matters mantra and was a good choice for the Monday panel opening SC16 (HPC Impacts on Precision Medicine: Life’s Future – The Next Frontier in Healthcare). PM’s tantalizing promise is to touch all of us, not just writ large but individually – effectively fighting disease, enhancing health and lifestyle, extending life, and necessarily contributing to basic science along the way. All of this can only be done with HPC. Read more…

By John Russell


Whitepaper:

Sorting Fact from Fiction: HPC-enabled Engineering Simulations, On-premises or in the Cloud

HPC may once have been the sole province of huge corporations and national labs, but with hardware and cloud resources becoming more affordable, even small and mid-sized companies are taking advantage.

Download this Report

Sponsored by ANSYS

Whitepaper:

Meeting Today’s Data Center Challenges

Between the demands of the data deluge and hardware advancements in both CPUs and GPUs alike, it’s no surprise that large HPC clusters are seeing rapid growth as a part of today’s Big Data escalation.

Download this Report

Sponsored by Chelsio

SpotlightON:

Advanced Scale Computing – Making the Case

Today’s leading organizations are dealing with larger data sets, higher volume and disparate data sources, and the need for faster insights. Don’t fall behind your competitors – discover big data made simple as we make the case for advanced-scale computing.

Download this Report

Sponsored by Zoomdata

Webinar:

Enabling Open Source High Performance Workloads with Red Hat

High performance workloads, big data, and analytics are increasingly important in finding real value in today’s applications and data. Before we deploy applications and mine data for mission and business insights, we need a high-performance, rapidly scalable, resilient infrastructure foundation that can accurately, securely, and quickly access data from all relevant sources. Red Hat offers technology that supports high performance workloads on a scale-out foundation, integrating multiple data sources and transitioning workloads across on-premises and cloud boundaries.

Register to attend this LIVE webinar

Sponsored by Red Hat
