Bad Moon Rises Over Cloud Perceptions

By Nicole Hemsoth

April 28, 2011

Let’s pretend for a moment that you are the owner or technical lead on a web application that recently captured the public’s attention and swelled in popularity—to the point that serving those visitors or customers without building a new data center would be impossible.

Instead of resorting to that up-front investment, what you need, of course, is infrastructure that will scale with your traffic or transactions—a resource that will allow you to avoid the cost and expertise required to maintain a system robust enough to handle the flood. What you need, in other words, is a cloud-based infrastructure provider—preferably one that offers an attractive guarantee of uptime and continual reassurance that no matter what, that data is backed up and replicated to death in the event of disaster.

So, there you have it; you’re all set to serve users and wash your hands of the whole infrastructure problem. In one relatively swift move you’ve shed the need for expensive, cumbersome hardware and can roll ahead with your business.

What could be more convenient? The resources simply scale along with demand, you pay for that demand as it happens, and outside of your maintenance of the core business applications, you can sit back and relax.

Right?

According to Thorsten von Eicken, CTO and founder of the cloud management firm RightScale and a former colleague of AWS chief Werner Vogels, this assumption is part of what sparked the trouble for those who had their business lifeblood in the cloud. Those guarantees and assurances backing any cloud computing outlet were as good as gold, weren’t they?

Following the initial Amazon Web Services outage, von Eicken wrote that although many customers were able to fall back on a solid “Plan B” in the case of an outage, some were not adequately prepared for such an event. He claims that “because Amazon’s reliability has been incredible, many users were not well-prepared, leading to widespread outages. Additionally, some users got caught by unforeseen failure modes rendering their failure plans ineffective.”

So is this to say that if you experienced extensive, damaging outages and data loss, the burden ultimately falls on you due to a lack of disaster recovery plans? Not necessarily. However, what von Eicken and other notable experts on the cloud movement suggest is that there might have been some over-confidence in Amazon’s ability to take care of everything. That aside, one has to wonder whether even AWS as a whole had been a little too confident.
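von Eicken’s “Plan B” point can be made concrete. The sketch below is purely illustrative (the volume names, timestamps, and the 24-hour window are hypothetical, not drawn from any real deployment): a minimal audit that flags volumes whose most recent off-provider backup has fallen outside the recovery point objective (RPO), exactly the kind of check that would have told a customer, before the outage, how exposed they were.

```python
from datetime import datetime, timedelta

# Hypothetical "Plan B" audit: flag volumes whose most recent
# off-provider backup is older than the recovery point objective (RPO).
# Volume names and timestamps are illustrative, not from any real account.
def stale_backups(last_backup, rpo_hours, now=None):
    """Return volume IDs whose latest backup exceeds the RPO window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=rpo_hours)
    return sorted(vol for vol, ts in last_backup.items() if ts < cutoff)

backups = {
    "vol-app-db": datetime(2011, 4, 28, 2, 0),  # nightly dump, completed
    "vol-logs":   datetime(2011, 4, 20, 2, 0),  # a week stale
}
now = datetime(2011, 4, 28, 9, 0)
print(stale_backups(backups, rpo_hours=24, now=now))  # -> ['vol-logs']
```

The same logic extends naturally to a real snapshot inventory; the point is that a “Plan B” is only a plan if something routinely verifies it.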

Now let’s go back to the scenario from before, with you at the helm. You’ve prepared yourself in any way you thought was necessary or appropriate given Amazon’s very solid track record of performance and uptime, its extensive service level agreements (SLAs) and its several success stories of mission-critical cloud operations.

Nonetheless, you woke up this morning (if you were lucky enough to sleep following the outage) to the following message that Amazon sent out to some of its customers today:

Hello,
 
A few days ago we sent you an email letting you know that we were working on recovering an inconsistent data snapshot of one or more of your Amazon EBS volumes.  We are very sorry, but ultimately our efforts to manually recover your volume were unsuccessful.  The hardware failed in such a way that we could not forensically restore the data.
 
What we were able to recover has been made available via a snapshot, although the data is in such a state that it may have little to no utility…
 
If you have no need for this snapshot, please delete it to avoid incurring storage charges.
 
We apologize for this volume loss and any impact to your business.

Sincerely,

Amazon Web Services, EBS Support

This message was produced and distributed by Amazon Web Services LLC, 410 Terry Avenue North, Seattle, Washington 98109-5210

You read this a few times. It doesn’t really sink in at first; after all, wasn’t everything protected under, like, 50 layers of different protection and duplication efforts on Amazon’s side?

You see key phrases and nothing else… “failed in such a way that we could not forensically restore the data…”

“…data is in such a state that it may have little to no utility…”

And your favorite line of all after you’ve had a few moments to really think about it:

 “If you have no need for this snapshot, please delete it to avoid incurring storage charges…”

At these lines, you cock a brow over your left eye, which started twitching occasionally a few days ago in the wake of the outage and now appears to be possessed.

Seriously?… This apologetic letter about the complete and total loss of my data, and you’re warning me that I am going to incur storage charges?

(insert select profanities here)

While certainly Amazon could have massaged this message, lending a spoonful of sugar for such medicine, this is not the only way that communication has played a significant role in the increasingly bad press the normally stable infrastructure provider is receiving today.

As von Eicken noted of the initial outage, Amazon receives an “F” for its ability to communicate effectively with users through the first signs of trouble. Many now claim that failure extends to the data loss matter at hand. At a time like this, however, good communication is needed more than ever; it seems that either Amazon has no idea of the extent and cause of the loss or it is afraid to let people know how bad it is and how far it extends. Either way, this does not bode well for the company, as it has opened the door to a bum-rush of negative speculation.

In Amazon’s defense, this is the first major incident with multiple, compounding failures that it has ever experienced. There have been latencies and delays in select zones in the past, but nothing on this level—not even close to it.

In addition to opening the door for widespread criticism and speculation, it has also allowed competitors (not to mention the six or more backup/recovery companies that are rapid-firing press releases, barely able to contain their excitement over this outage and loss) to claim dominance—to state that they are immune from such disasters.

But we all know, no one is immune. If Amazon suffers this type of problem, Rackspace could suffer the same issue. If Rackspace trips momentarily, so could Microsoft’s services.

And the point that no one brings up here is that this same problem—and worse—could happen in your very own data center if you chose not to hop on the cloud bandwagon. And it could be far more destructive and expensive.

We’ll spare Amazon for a moment and allow that the message was written in haste. After all, the company has been raked over the coals since news of sporadic data destruction broke…and broke in a very public way.

Just as cloud computing hit the mainstream media outlets in a big way over the past year, so too did news of the problems that could arise when you push your core business into the ether.

The announcement today that data was not only lost or temporarily unavailable—that instead it was actually destroyed—certainly doesn’t bode well for the future of mission critical applications being exclusively hosted on cloud computing infrastructure. It is unfortunate for IaaS providers that this should happen right at that much-anticipated golden moment of growing comfort with cloud computing, but perhaps we should consider this event in light of a few points that major media outlets aren’t talking about.

By the way, the media coverage I’ve encountered this morning paints what happened with the Amazon cloud in some rather black and white terms. Mainstream outlets can’t necessarily be condemned for this, however—after all, it’s not easy to produce live news that is approachable for the folks who just a few months ago learned that clouds were more than the puff upstairs, while still painting a picture of the outage that is technically dense and thus in better context.

With that said, the media really doesn’t have a leg to stand on (nor does anyone else at this point) when Amazon has been (notoriously) uncommunicative about what actually happened. Aside from sporadic updates following the initial outage and some updates that were dense but not necessarily revealing, the public, not to mention the users whose data may have been chewed up, remains in the dark.

If an infrastructure provider of any size has a problem like this, the first item on the agenda should be communication. This not only protects them from wild speculation across media outlets, but it also protects the very notion of the cloud as a reasonable solution for everyone—no matter who they choose to rent hardware from.
