Amazon Climbs Into the HPC Arena

By Michael Feldman

July 14, 2010

Amazon’s cloud platform got a high performance boost this week with the announcement of its Cluster Compute Instances (CCI). CCI specifically targets HPC workloads, incorporating high-end CPU horsepower and a low-latency interconnect fabric into the company’s popular EC2 computing on-demand offering. The new capability welcomes HPC into the most well-recognized public cloud in the world.

In a nutshell, the new offering is based on a new EC2 instance under the CCI category: the Cluster Compute Quadruple Extra Large Instance, which, for the sake of brevity, I’m going to refer to as the HPC instance. It is defined as a dual-socket Intel Xeon X5570 (2.93 GHz, quad-core) server or virtual server with 23 GB of memory and 1,690 GB of external storage. Servers are connected via a 10 Gigabit Ethernet network. The HPC instance is the ninth EC2 instance type offered by Amazon and the only one that actually spells out the specific CPU and I/O fabric being employed. For the other eight instances, you are provided a generic notion of capability based on a specified number of EC2 compute units and a general metric for network I/O performance (moderate or high).

For users of the HPC instance, the default cluster size (aka the instance limit) is eight servers, providing 64 cores. That’s probably the sweet spot for the type of customer Amazon is going after — presumably middle-range HPC users with moderately scalable applications. But, as in any computing on-demand offering worthy of that title, capacity can be extended dynamically.

“An instance limit is only an initial limit and can be easily removed by sending us an email, just like any other Amazon EC2 instance,” said Deepak Singh, business development manager for Amazon Web Services (AWS), in an email to HPCwire. “Customers can provision instances in minutes and shut them down and restart as they need in a truly scalable and elastic environment.” The exact extent of this elasticity is somewhat of a mystery though. And at this point, Amazon is not revealing how big a cluster can be devoted to a single customer.

It’s worth noting that Amazon has run Linpack on 880 of its HPC-style servers, reporting a performance result of 41.82 teraflops. That’s well into TOP500 territory (equivalent to the No. 146 slot on the June 2010 list). It’s also worth noting that, according to Intel, the peak performance of the Xeon X5570 CPU is 46.88 gigaflops, which means the Linpack efficiency for the EC2 cluster is just a shade over 50 percent. That’s pretty much on par with vanilla GigE clusters, although the best 10 GbE clusters can hit 84 percent Linpack efficiency and most InfiniBand-based systems will be in the 70 to 92 percent range.
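The efficiency figure above is easy to reproduce. Here’s a back-of-the-envelope sketch in Python; the server count, Linpack result, and per-CPU peak are the numbers cited in the article, while the arithmetic (two sockets per server, peak = servers × sockets × per-CPU peak) is my own working:

```python
# Back-of-the-envelope Linpack efficiency for Amazon's reported EC2 run.
# Inputs are the figures cited above; 2 sockets/server per the instance spec.

SERVERS = 880
SOCKETS_PER_SERVER = 2
PEAK_PER_CPU_GF = 46.88   # gigaflops, Intel's peak figure for the X5570
LINPACK_TF = 41.82        # teraflops, as reported by Amazon

# Theoretical peak of the whole cluster, converted to teraflops
peak_tf = SERVERS * SOCKETS_PER_SERVER * PEAK_PER_CPU_GF / 1000.0

# Linpack efficiency = measured Rmax / theoretical Rpeak
efficiency = LINPACK_TF / peak_tf

print(f"Theoretical peak: {peak_tf:.1f} TF")      # ~82.5 TF
print(f"Linpack efficiency: {efficiency:.1%}")    # just over 50%
```

The result lands at roughly 50.7 percent, which is where the "shade over 50 percent" figure comes from.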

Customers won’t run Linpack for its own sake, but the unimpressive yield may remind potential users that even the new HPC instance could behave less like a supercomputer than they’re expecting. Amazon has provided few details about the 10 GbE setup or how the Hardware Virtual Machine (HVM) virtualization scheme being employed might impact performance. And since there are no performance metrics publicly available for real applications, it’s too early to tell how traditional MPI codes will fare. To its credit, Amazon is being careful not to make claims it can’t demonstrate.

“During our private beta period, customers ran a variety of MPI codes, including MATLAB, in-house computational fluid dynamics software for aircraft and automobile design, and molecular dynamics codes for protein simulation like NAMD,” said Singh. “Our partners and AWS used standard benchmark packages like HPCC and IMB. Now that the service is available to the broad public, we expect an increased variety in the types of applications our customers will be running.”

The Magellan Cloud research team at the National Energy Research Scientific Computing Center (NERSC) was one of those beta customers and got a chance to test drive the new EC2 offering prior to this week’s official launch. They reported that a series of HPC application benchmarks “ran 8.5 times faster on Cluster Compute Instances for Amazon EC2 than the previous EC2 instance types.” But considering the lesser CPUs and GigE configurations on the non-HPC instances, that may end up being faint praise.

EC2 has surely left some room at the high end for more performant on-demand platforms and for customers that require a greater level of HPC expertise than Amazon can muster. Experienced HPC vendors like IBM, SGI, Penguin Computing, and others are already staking out this territory. While those vendors may be gratified that a company like Amazon thinks the HPC on-demand model is ready for prime time, those same companies will now have to prove their offerings are better than Amazon’s.

Penguin Computing seems more than willing to make that case. From CEO Charles Wuischpard’s point of view, his company’s one-year-old Penguin On-Demand (POD) HPC rental service has some clear differentiation from Amazon’s new HPC offering. At the hardware level, POD offers more memory per core than EC2, InfiniBand connectivity, a GPU acceleration option, and Panasas-based parallel file storage.

But the big differentiator, according to Wuischpard, is the level of engineering support they’re able to provide. Every POD deal comes with its own HPC engineer, who makes sure the whole software stack — cluster management, network drivers, compilers, and so on — is configured correctly for the end-user applications. “The customers we have today are truly not computer scientists and we help them through the whole process,” said Wuischpard.

Unit pricing is somewhat comparable. POD charges $0.25 per core-hour for compute time, while Amazon offers one HPC instance (two quad-core CPUs) for $1.60 per hour. Both provide cost incentives for longer time commitments. But overall, Wuischpard thinks POD will offer better value than Amazon. It should be remembered that wall clock time is the key metric here. If an on-demand platform can run a given application twice as fast as a competitor’s, it has effectively cut its per-unit cost in half. “As long as we’re less expensive overall, I’m pretty comfortable with where we are,” said Wuischpard.
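The wall-clock point is worth making concrete. A quick sketch in Python: the rates are the published ones above (Amazon’s works out to $0.20 per core-hour on an 8-core instance), but the job size and runtimes are made-up illustrative values, not benchmark data:

```python
# Illustrative cost comparison: hourly rate alone vs. rate x wall clock time.
# Rates are the published figures; cores and runtimes are hypothetical.

POD_RATE = 0.25          # $ per core-hour (Penguin POD)
EC2_RATE = 1.60 / 8      # $ per core-hour (8-core HPC instance at $1.60/hr)

def job_cost(rate_per_core_hour, cores, wall_hours):
    """Total cost of a run: rate x cores x elapsed wall clock time."""
    return rate_per_core_hour * cores * wall_hours

# Suppose the same 64-core job runs twice as fast on the pricier platform:
ec2_cost = job_cost(EC2_RATE, 64, 10.0)   # 10 hours (hypothetical)
pod_cost = job_cost(POD_RATE, 64, 5.0)    # 5 hours (hypothetical)

print(f"EC2: ${ec2_cost:.2f}, POD: ${pod_cost:.2f}")
```

Under those assumed runtimes, the platform with the higher per-core-hour rate still delivers the job for less money, which is exactly the argument Wuischpard is making.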

For a broader perspective of Amazon’s HPC launch, see Amazon Adds HPC Capability to EC2 and related coverage at HPC in the Cloud.
