Today's Top Feature

IDG to Be Bought by Chinese Investors; IDC to Spin Out HPC Group

US-based publishing and investment firm International Data Group, Inc. (IDG) will be acquired by a group of Chinese investors. Read more…

By Tiffany Trader

Center Stage

France’s CEA and Japan’s RIKEN to Partner on ARM and Exascale

France’s CEA and Japan’s RIKEN institute announced a multi-faceted five-year collaboration to advance HPC generally and prepare for exascale computing. Among the particulars are efforts to: build out the ARM ecosystem; work on code development and code sharing on existing and future platforms; share expertise in specific application areas (materials and seismic sciences, for example); improve techniques for combining numerical simulation with big data; and expand HPC workforce training. It seems to be a very full agenda.

By Nishi Katsuya and John Russell

ARM Waving: Attention, Deployments, and Development

It’s been a heady two weeks for the ARM HPC advocacy camp. At this week’s Mont-Blanc Project meeting held at the Barcelona Supercomputing Center, Cray announced plans to build an ARM-based supercomputer in the U.K., while Mont-Blanc selected Cavium’s ThunderX2 ARM chip for its third phase of development. Last week, France’s CEA and Japan’s RIKEN announced a deep collaboration aimed largely at fostering the ARM ecosystem. This activity follows a busy 2016 in which SoftBank acquired ARM, OpenHPC announced ARM support, ARM released its SVE spec, Fujitsu chose ARM for the post-K machine, and ARM acquired HPC tool provider Allinea in December.

By John Russell

Spurred by Global Ambitions, Inspur in Joint HPC Deal with DDN

Inspur, the fast-growing cloud computing and server vendor from China that has several systems on the TOP500 list, has entered a joint HPC deal with DDN. Read more…

By Doug Black

HPCwire 2016 Readers’ and Editors’ Choice Awards

Who are the big winners for 2016? Come get a look at who is making a difference and showing why #HPCmatters.


Clemson Software Optimizes Big Data Transfers

January 11, 2017

Data-intensive science is not a new phenomenon as the high-energy physics and astrophysics communities can certainly attest, but today more and more scientists are facing steep data and throughput challenges fueled by soaring data volumes and the demands of global-scale collaboration. Read more…

By Tiffany Trader

Heading into SC16, CENATE Flexes Its Growing Muscle

November 8, 2016

In September, the Center for Advanced Technology Evaluation (CENATE) at Pacific Northwest National Laboratory (PNNL) took possession of NVIDIA’s DGX-1 GPU-based (Pascal P100) supercomputer. Read more…

By John Russell

Argo Project: What’s In Store for Exascale OS

October 25, 2016

With the heady performance threshold that is exascale in sight, and the power, memory and concurrency challenges well-documented, no element of the hardware/software stack is free from scrutiny, including the operating system. Read more…

By Tiffany Trader

OpenHPC Pushes to Prove its Openness and Value at SC16

October 24, 2016

At SC15 last year, the announcement of OpenHPC – the nascent effort to develop a standardized HPC stack to ease HPC deployment – drew a mix of enthusiasm and wariness, the latter in part because of Intel’s prominence in the group. There was general agreement that creating an open source, plug-and-play HPC stack was a good idea. Read more…

By John Russell

Container App ‘Singularity’ Eases Scientific Computing

October 20, 2016

HPC container platform Singularity is just six months out from its 1.0 release but already is making inroads across the HPC research landscape. It's in use at Lawrence Berkeley National Laboratory (LBNL), where Singularity founder Gregory Kurtzer has worked in the High Performance Computing Services (HPCS) group for 16 years. Read more…

By Tiffany Trader

Bank of Italy Converges HPC and Enterprise Office with New Cluster

October 10, 2016

The democratization of high performance computing (HPC) and the converged datacenter have been topics of late in the IT community. This is where HPC, high performance data analytics (big data/Hadoop workloads), and enterprise office applications all run on a common clustered compute architecture with a single file system and network. Read more…

By Ken Strandberg

Profile of a Data Science Pioneer

June 28, 2016

As he approaches retirement, Reagan Moore reflects on SRB, iRODS, and the ongoing challenge of helping scientists manage their data. In 1994, Reagan Moore managed the production computing systems at the San Diego Supercomputer Center (SDSC), a job that entailed running and maintaining huge Cray computing systems as well as networking, archival storage, security, job scheduling, and visualization systems. At the time, research was evolving from analyses done by individuals on single computers into a collaborative activity using distributed, interconnected and heterogeneous resources. Read more…

By Karen Green, RENCI

Paul Messina Shares Deep Dive Into US Exascale Roadmap

June 14, 2016

Ahead of ISC 2016, taking place in Frankfurt, Germany, next week, HPCwire reached out to Paul Messina to get an update on the deliverables and timeline for the United States' Exascale Computing Project. The ten-year project has been charged with standing up at least two capable exascale supercomputers in 2023 as part of the larger National Strategic Computing Initiative launched by the Obama Administration in July 2015. Read more…

By Tiffany Trader


