Benchmarking HPC in the Cloud

By Tiffany Trader

June 10, 2014

All clouds are not the same. It’s an adage that rings especially true when it comes to running high-performance computing (HPC) workloads. HPC middleware solutions vendor Techila Technologies recently took the time to benchmark and analyze three of the top cloud platforms – Amazon Web Services, Google Compute Engine, and Microsoft Azure – in the context of several real-world high-performance computing scenarios. The results are detailed in the resulting report, titled simply “Cloud Benchmark – Round 1.”

“If the technical features of a cloud do not align with the needs of business, a solution which looks cost efficient can have a high cost of ownership.” This observation by Techila speaks to why the benchmarking was carried out: to explore which cloud offerings and instance types work best for a given application.

[Table 1: Techila HPC cloud benchmark – technical specifications of the tested clouds]

Techila explains that the benchmark experiment was intended to provide HPC customers with an easy-to-understand analysis. Potential cloud adopters have told the company that FLOPS-per-dollar and Gbps-per-dollar are interesting but do not adequately answer their questions or address their concerns.

“Raw processor power, available memory, or theoretical maximum data transfer rate do not always translate directly to application performance,” writes Techila. “Because of this, the focus of [the] benchmark experiment is on testing the performance of AWS, Google Compute Engine (GCE), and Azure in real-world HPC use-cases, and on studying how the leading clouds can respond to requirements arising from HPC scenarios.”

The test suite that Techila used was developed with the participation of cloud providers and users of MATLAB, the R programming language, and simulation-backed engineering tools. After the first round of testing, the primary conclusion was that not all platforms demonstrate the same level of elasticity.

Tests fell into two categories: deployment and application performance. The first test zeroed in on a cloud’s ability to respond to computing needs. The focus was directed to embarrassingly parallel problems, which can scale to best use a large number of cores. (Techila says it is planning MPI-like tests in the future.)
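
To make the distinction concrete, here is a minimal Python sketch of an embarrassingly parallel task farm: each work item is fully independent, so the pool can grow to as many cores (or cloud instances) as are available. The `estimate_pi` work unit is a hypothetical stand-in for a real workload, not something from the report.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def estimate_pi(args):
    """One independent work unit: a Monte Carlo estimate of pi.
    No communication with other tasks, so it scales trivially."""
    seed, samples = args
    rng = random.Random(seed)
    hits = sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / samples

if __name__ == "__main__":
    tasks = [(seed, 100_000) for seed in range(64)]
    with ProcessPoolExecutor() as pool:   # one worker process per core
        estimates = list(pool.map(estimate_pi, tasks))
    print(f"pi is approximately {sum(estimates) / len(estimates):.4f}")
```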

The experiment set out to answer several questions, such as:

  • What instance types provide the best performance? Should I use the most expensive instance types?
  • Does the operating system of the cloud have an effect on the throughput of the system?
  • Should I worry about the internal infrastructure of the cloud?

For convenience, Techila provides a chart of each cloud’s technical specifications (see above). With regard to instance types, the report looked at Azure’s A8 and Extra Large (A4) instances, both with Windows. For AWS, two implementations of c3.8xlarge were examined, one with Windows and one with Linux. And for Google Compute Engine (GCE), the n1-standard-8 instance was used with Debian 7.

While cloud pricing has gone through many revisions, the prices at the time of the experiment are also listed. The price per CPU core/hour ranges from $0.060 (for AWS c3.8xlarge with Linux) to $0.306 (for Azure A8).
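
For a sense of scale, a quick back-of-the-envelope calculation using those quoted per-core prices shows what a 256-core environment like the one in the deployment tests would cost per hour (June 2014 figures, long since revised):

```python
# Hourly cost of a 256-CPU-core environment at the per-core
# prices quoted in the report.
CORES = 256
price_per_core_hour = {
    "AWS c3.8xlarge (Linux)": 0.060,  # USD per CPU core/hour
    "Azure A8 (Windows)":     0.306,
}
for instance, price in price_per_core_hour.items():
    print(f"{instance}: ${CORES * price:.2f} per hour")
# AWS c3.8xlarge (Linux): $15.36 per hour
# Azure A8 (Windows): $78.34 per hour
```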

The deployment tests analyzed the deployment of a 256 CPU core virtual HPC environment in a cloud. Among the interesting findings, Techila observed that deployments with the Microsoft Windows operating system take longer than those with a Linux operating system. The authors suggest this is likely related to the System Preparation (Sysprep) phase, which occurs during the installation of Microsoft Windows.
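
The report does not describe its measurement harness, but the basic idea is easy to reproduce. A minimal sketch using today’s boto3 AWS SDK (the AMI ID is a placeholder, and the instance count is an assumption chosen to reach 256 cores) times how long a batch of instances takes to reach the running state:

```python
import time
import boto3  # third-party AWS SDK: pip install boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

start = time.monotonic()
resp = ec2.run_instances(
    ImageId="ami-XXXXXXXX",     # placeholder: a Windows or Linux AMI
    InstanceType="c3.8xlarge",
    MinCount=8, MaxCount=8,     # 8 instances x 32 vCPUs = 256 cores
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]

# Block until every instance reports 'running'; Windows images
# typically take longer to come up because of the Sysprep phase.
ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
print(f"Deployment took {time.monotonic() - start:.0f} s")
```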

[Figure 1: Techila HPC cloud benchmark – deployment results]

Another finding relates to the deployment curves of the AWS c3.8xlarge and Azure A8 Windows instances: deployment does not scale linearly. The report’s authors suggest that “a possible reason for this is that the availability of these instance types is still quite limited and datacenters have challenges in responding to a request for a large number of these instance types.”

Testing deployment on Azure was not possible in this experiment because Azure is designed as a Platform-as-a-Service (PaaS) and does not provide the needed Java management interfaces for the current version of the Techila Deployment Tool.

The configuration tests examined how MATLAB-based applications fare in a 256 CPU core virtual HPC environment. The findings show that configuration of an instance was slower in Azure than in the other cloud offerings. The authors reason that this could be due to Azure’s PaaS-based design; AWS and GCE, by contrast, provide direct access to the infrastructure. “Because of the limitations of Azure’s PaaS design Techila middleware can not support Peer-to-Peer (P2P) transfer technology inside the HPC environment in Azure,” note the report’s authors.

Another key observation was that configuring the AWS instance was quicker with Linux than with Windows. While the experimenters can’t confirm the cause, they think it might be explained by file system behavior. The transferred data contained approximately 33,000 files, and it has been suggested that the Windows file system performs more slowly when handling a large number of rather small files.
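
That small-file hypothesis is easy to test in isolation. A minimal sketch (the file count and size are assumptions based on the article’s ~33,000-file figure) times how long a file system takes to create many small files:

```python
import os
import tempfile
import time

def time_small_files(count=33_000, size=1_024):
    """Create `count` files of `size` bytes each and return elapsed
    seconds. Metadata-heavy workloads like this are where file
    system differences between Windows and Linux tend to show up."""
    payload = b"x" * size
    with tempfile.TemporaryDirectory() as tmpdir:
        start = time.monotonic()
        for i in range(count):
            with open(os.path.join(tmpdir, f"file_{i:05d}.dat"), "wb") as f:
                f.write(payload)
        return time.monotonic() - start

if __name__ == "__main__":
    print(f"33,000 small files written in {time_small_files():.1f} s")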

The HPC application tests looked at three common application scenarios:

  • model calibration (using MATLAB code)
  • portfolio simulation (implemented in R)
  • machine learning (implemented in C++)

Techila provides detailed assessments of each application case, with charts that include wall-clock time, price per CPU core, and the cost of cloud computing.
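
The report implements its portfolio case in R; as an illustration of the same pattern, here is a hedged Python sketch of a Monte Carlo portfolio simulation in which each price path is an independent task (the drift and volatility parameters are invented for the example, not taken from the report):

```python
import random
from multiprocessing import Pool

def simulate_path(seed, steps=252, mu=0.0003, sigma=0.01, start=100.0):
    """Simulate one price path with daily Gaussian returns.
    Each path is independent, so paths parallelize perfectly."""
    rng = random.Random(seed)
    value = start
    for _ in range(steps):
        value *= 1.0 + rng.gauss(mu, sigma)
    return value

if __name__ == "__main__":
    with Pool() as pool:   # one worker per core
        finals = pool.map(simulate_path, range(10_000))
    print(f"Mean terminal value: {sum(finals) / len(finals):.2f}")
```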

Here are several of the interesting observations made by the experimenters:

For MATLAB code:

“The findings show that in this particular scenario MATLAB seems to perform better in Windows environment than on Linux environments.”

For R users:

“An interesting observation is related to the performance of AWS c3.8xlarge. When compared to Azure A8 and Azure Extra Large, we can see that in this case, the Azure Extra Large provides very similar performance to AWS c3.8xlarge, and Azure A8 provides double the performance compared to AWS c3.8xlarge and Azure Extra Large. Because the cost of Azure Extra Large is affordable and Azure supports fine-granularity billing, this can make Azure Extra Large a great value option for users of the R programming language.”

“Another interesting observation is that in this case AWS c3.8xlarge with Linux provides clearly better performance than AWS c3.8xlarge running the Windows operating system.”

For machine learning:

“Another interesting observation is that in this specific case Azure A8 and AWS c3.8xlarge with the Windows operating system provided very similar performance, despite differences observed in other test cases. It was suggested that this could be related to the fact that some scenarios are well suited for hyper-threading and can benefit from it. Because of this, if the goal is to get the most out of a hyper-threading platform, it is important to understand the suitability of the applications for the platform.”
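
Whether a workload can exploit hyper-threading depends on whether it already saturates the physical cores’ execution units. A small sketch using the third-party psutil package shows how to check what a platform actually exposes:

```python
import psutil  # third-party: pip install psutil

logical = psutil.cpu_count(logical=True)    # hardware threads
physical = psutil.cpu_count(logical=False)  # physical cores

print(f"{physical} physical cores, {logical} logical cores")
if logical and physical and logical > physical:
    # Hyper-threading is active. Dense floating-point codes often
    # scale only with physical cores, while workloads with frequent
    # stalls (memory- or I/O-bound) can use the extra threads.
    print("Hyper-threading detected")
```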

Based on the results of Techila’s first cloud benchmarking round, the company is confident that cloud computing will have a role to play in HPC. The experimenters also believe that cloud will have a profound democratizing effect on HPC, writing:

“HPC will no longer be a science that requires special training and expensive upfront investments. Cloud will bring HPC to new desks, and a simplified user experience will empower new users to benefit from it.”

The testing process also served as a reminder that commercial cloud platforms evolve more like hardware than versioned software: they carry no version numbers, vendors are constantly pushing out new instance types and features, and prices are under constant revision. Because of this, any benchmarking must be regarded as a work in progress. To stay relevant, Techila plans to keep its report up to date by repeating the tests periodically.

Techila also raises the point that elasticity is not truly unlimited. Resource provisioning, even at the scale of Amazon, is still bounded by physical limits. Aside from impacting the planning stage, Techila maintains that this physical architecture is the reason why HPC in the cloud needs middleware.

“Performing such experiments in a loosely coupled infrastructure, such as the cloud, requires a middleware, which enables horizontal scaling and can hide the possible nonlinearities of the physical infrastructure,” the report states. “After all, cloud is built of very similar units to what we see in our offices. When we come to the limits of the physical unit’s scalability, we need a solution which enables scaling over the limit, which in this experiment was the Techila HPC middleware.”
