Extreme Scale HPC: How Western Digital Corporation leveraged the virtually unlimited HPC capacity on AWS in their quest to speed up innovation and build better products

By Bala Thekkedath - Global HPC Marketing Lead, Amazon Web Services

December 10, 2018

Recently, AWS and Western Digital embarked on a fun, challenging project: evaluating the impact of running Western Digital's electromagnetic simulations on a massive HPC cluster built on AWS using Amazon EC2 Spot Instances. The lessons we learned and the results we achieved are very interesting, and I am excited to share a quick overview here.

One of the biggest advantages of moving your HPC workloads to AWS is the ability to achieve extreme scale in both capacity and configuration, without a large upfront investment or heartache over long-term commitments. If you work for an organization that has moved HPC workloads to the cloud, or has at least started the process by bursting to the cloud when demand spikes, you have experienced the agility and flexibility the cloud affords. You either have an individual account to request resources in the cloud, or you request them via your HPC admin. In both cases, you start building "your" cluster when you are ready. In most cases the cluster is built automatically by your job scheduler as you submit your jobs, and compute resources are ready within minutes. When the jobs are done, you shut down your cluster and stop paying for it.

When you request your cluster, unlike in your on-premises environment, you can specify what type of CPUs (or GPUs, or FPGAs) you would like to run a particular application on. Ever wonder how much faster your application would run on the latest CPU or GPU? What if you wanted to determine whether an I/O-bandwidth-optimized configuration or a CPU-optimized one was better for parts of your workflow? Now you can try many different configuration types without going through a cumbersome procurement process. Given the many instance types available, and how easily they drop into a workflow, it becomes incredibly easy to fine-tune specific portions of your HPC workflow. Then there is the scale. It does not matter whether you request 1,000 cores for 8 hours or 8,000 cores for 1 hour; you pay the same. So, if your application supports it, why not scale up your resources and get to results faster?
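To make that cost equivalence concrete, here is a minimal back-of-the-envelope sketch in Python. The per-core-hour price is a hypothetical placeholder, not an actual AWS rate; the point is only that the same number of core-hours costs the same regardless of how wide you run.

```python
# Back-of-the-envelope cost comparison: same total core-hours, same spend.
# PRICE_PER_CORE_HOUR is an illustrative placeholder, not a real AWS price.
PRICE_PER_CORE_HOUR = 0.05  # USD, hypothetical

def cluster_cost(cores: int, hours: float, price: float = PRICE_PER_CORE_HOUR) -> float:
    """Pay-as-you-go cost model: you pay only for the core-hours you actually use."""
    return cores * hours * price

small_and_slow = cluster_cost(cores=1_000, hours=8)  # 8,000 core-hours
big_and_fast = cluster_cost(cores=8_000, hours=1)    # 8,000 core-hours

print(small_and_slow, big_and_fast)  # both 400.0 -> same spend, results 8x sooner
```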

That is exactly what a recent collaborative project between AWS and Western Digital did. First, a quick overview of the hard disk drive (HDD) market. The HDD market is extremely competitive. The ever-increasing demand for capacity from enterprises, particularly large hyperscale data centers (like us), has been keeping Western Digital very busy. Faced with the need to innovate to meet the growing demand for data storage capacity, the engineering teams at Western Digital are always pushing the limits of physics and engineering. Enterprise HDDs are still confined to a 3.5-inch form factor (as they have been for years), with no chance of increasing the size to accommodate additional capacity and performance requirements. So the only way to meet increased capacity demands is to cram more bits into the same space and make sure the drives can handle growing performance demands. The technical term is increasing the areal density of the media: keep shrinking the geometry you are allowed to use to capture the ones and zeros on the rotating media. As you shrink those geometries, there are various aspects of crosstalk, noise, and atomic behavior that you have to comprehend to arrive at an ingredient combination that works 24x7x365 and can be manufactured at high volume. It is quite an art and science to get all those things to line up exactly, make it repeatable, make it manufacturable, make it operational, and, oh, by the way, get it to work for years without a failure.

A big focus of the engineering simulation work at Western Digital is evaluating the different combinations of technologies and solutions (or the ingredients that make up those solutions) that go into making new HDDs. The basic design of a hard disk involves rotating media and a head on a slider arm that moves over the media. The engineering teams are looking at smaller and smaller geometries for the recording channels on the media so they can fit more and more bits into the same space, and they want to achieve faster read and write times from that media. The simulations therefore sweep many variables, such as the media itself, its speed of rotation, and the materials that constitute it, to find the combination that provides higher density and faster read-write times. The end goal is to determine which combinations work and which don't, and to make sure the combinations that don't work are avoided in the manufacturing process and in the solution/component recipes for the physical products.

As part of this precedent-setting collaborative work, Western Digital ran around 2.3 million simulation jobs on a Spot-based cluster of a little over one million vCPUs. If they were to run those same 2.3 million simulations on a standard Spot-based cluster of 16,000 vCPUs at a time (as they do today), it would have taken them about 20 days to get the same work done. The idea of doing 20 days of work in 8 hours is a game changer. The impact goes beyond traditional business metrics; it is a great competitive advantage for a business that is driven by innovation.

So, what goes into scaling an application to run on infrastructure of this extreme capacity? It is a coordinated effort between the application engineers, the infrastructure engineers, and the team at AWS. At a 10,000-foot level, we are taking a large statistical simulation, splitting it into jobs that each run on a single vCPU, and then, when the jobs are done, bringing everything back and collating the results. That requires work on both the application side and the infrastructure side: the application has to ensure that the individual simulations are all done correctly, and the infrastructure has to coordinate jobs across a vast number of servers and cores and bring all the data back for collation. What made this run even more interesting is that we used EC2 Spot Instances, so the application had to be resilient to any job preemption or interruption that might happen. During the 8-hour run at the full one-million-vCPU scale, we experienced an interruption rate of less than 1%. From an infrastructure point of view, we had to evaluate the limits that exist on a number of underlying services (compute, storage, API calls). Because the cluster ran in a single region but spanned multiple Availability Zones, we combined the features of EC2 Spot Fleet with the highly scalable cluster management and scheduling of Univa NavOps and Grid Engine to coordinate cluster management across that wide swath of infrastructure and keep the cluster fully utilized even under such a high workload.
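As one illustration of the resilience piece, a worker running on a Spot Instance can poll the EC2 instance metadata endpoint for an interruption notice and hand its unfinished job back to the scheduler before the instance is reclaimed. The sketch below shows that general pattern; it is not Western Digital's actual job wrapper, and the `requeue_job` and `run_step` pieces are hypothetical stand-ins for whatever the scheduler provides.

```python
# Minimal sketch of Spot interruption handling on a worker node.
# Not the actual tooling used in this run; requeue_job() is a hypothetical helper
# standing in for the scheduler's (e.g., Grid Engine's) resubmission mechanism.
import time
import requests

METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def interruption_pending() -> bool:
    """Return True once EC2 has scheduled this Spot Instance for interruption."""
    try:
        resp = requests.get(METADATA_URL, timeout=1)
        return resp.status_code == 200  # 404 means no interruption is scheduled
    except requests.RequestException:
        return False

def requeue_job(job_id: str) -> None:
    # Hypothetical: hand the unfinished job back to the scheduler's queue.
    print(f"requeueing {job_id}")

def run_with_interruption_guard(job_id: str, run_step) -> None:
    """Run a job in small steps, checking for an interruption notice between steps."""
    while not run_step():           # run_step() returns True when the job has finished
        if interruption_pending():  # roughly two minutes of warning before reclaim
            requeue_job(job_id)
            return
        time.sleep(5)
```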

A few other points are worth highlighting here. First, Western Digital, Univa, and AWS were able to fully exploit the configuration flexibility that running HPC workloads on the cloud offers. Before embarking on this simulation, engineers from both AWS and Western Digital spent a lot of prep time profiling the various instance types that Amazon EC2 offers. By profiling this multitude of instance types (over 25 different instance types), we were able to land on an optimal range of instances offering AVX acceleration for this workload, giving the Spot Fleet the flexibility and freedom to find the cheapest and fastest hardware for the job. Second, this simulation was also a major achievement in the use of containers to run HPC workloads. The entire application was ported onto containers, which is a big shift from having to haul drivers and dependencies around across jobs and VMs. This run may well have been one of the largest container fleets ever to run a single application! Third, we used Amazon Simple Storage Service (Amazon S3) as the storage back-end for this simulation. Supporting this rate of data access at such massive scale required no tuning effort, as S3 bandwidth scaled gracefully and peaked at 7,500 PUT requests per second. And last, but not least, this was a great example of how Spot Fleet can simplify cluster management. In this particular case, we ran just three simultaneous Spot Fleet requests and were able to reach a million cores in the cluster in around 1 hour and 32 minutes!
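To give a flavor of what a diversified Spot Fleet request can look like, here is a minimal boto3 sketch that asks for capacity across several instance types. The AMI ID, IAM role ARN, subnet, instance mix, and target capacity are all placeholders for illustration; they are not the values used in this run.

```python
# Minimal sketch of a diversified Spot Fleet request with boto3.
# All identifiers (AMI, IAM role, subnet) and the capacity are placeholders,
# not the configuration used in the Western Digital run.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

launch_specs = [
    {
        "ImageId": "ami-0123456789abcdef0",       # placeholder AMI with the containerized app
        "InstanceType": itype,
        "SubnetId": "subnet-0123456789abcdef0",   # placeholder subnet
    }
    for itype in ["c5.18xlarge", "c4.8xlarge", "m5.24xlarge"]  # illustrative mix
]

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/spot-fleet-role",  # placeholder
        "AllocationStrategy": "diversified",  # spread capacity across instance pools
        "TargetCapacity": 1000,               # placeholder, counted in instances here
        "Type": "maintain",                   # replace interrupted capacity automatically
        "LaunchSpecifications": launch_specs,
    }
)
print(response["SpotFleetRequestId"])
```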

To learn more, visit https://aws.amazon.com/hpc or reach out to your local AWS representative.

 
