EMC Suddenly Cedes the Clouds

By Nicole Hemsoth

July 4, 2010

There are cool ways to get a certain message across…even when that message is a painful one.

And you know, there are not-so-cool ways to do the exact same thing.

Perhaps EMC, which suddenly announced that customers could no longer “leverage” its services (and should get out ASAP), ought to have instructed its web designers to remove the banner proclaiming, in peaceful hues of lime green and turquoise, “Leverage the POWER of the cloud,” considering that it is followed by the announcement:

Dear Atmos Online Customers,

“We are no longer planning to support production usage of Atmos Online. Going forward, Atmos Online will remain available strictly as a development environment to foster adoption of Atmos technology and Atmos cloud services offered by our continuously expanding range of partners who offer production services.”

To summarize the rest: pack up your stuff and go, because there’s no production support for Atmos Online. We are not providing any SLA or availability guarantees, so hurry and migrate anything that matters to one of our partners. Like now. Yes, now.

Oh and…

“You are welcome to continue leveraging Atmos Online for development purposes as needed. These changes also do not affect our commitment to your success.”

Well, now that you put it that way, EMC. Here, just before the Fourth of July, your users were all set for hot dogs and fireworks; instead they will be engaged in frantic migration attempts with minimal support. That makes it all seem a little better. Really.

Perspective on EMC’s Decision—and the Implications

There is no question that this is fodder for arguments against the cloud as a reliable, cost-effective paradigm shift for IT, since from here it certainly looks like there was no warning. If this were any other company, perhaps one less well known, the fear would be that it might disappear altogether, data in tow.

Info-Tech Research Group put EMC in its ranks of Rising Stars in March because, according to Info-Tech research analyst Laura Hansen-Kohls, the company seemed at the time to hold great promise. In an interview on Friday with HPC in the Cloud, Hansen-Kohls stated, “When we spoke with them before they were named a rising star, they were a major storage vendor, so they had a market share edge on par with what someone like Amazon would have. But also, at least when we spoke to them, they seemed to be making a significant investment in the cloud even though it was clear they didn’t have a defined strategy. Still, we felt that once they got the marketing push underway and communicated more clearly, they could have competed with Amazon. But from what we understand now, the competition with their partners was too direct, so they decided to exit.”

While this is a perfectly valid and easy-to-understand reason for EMC’s sudden decision to pull all support and leave customers hanging without notice, it seems there has to be something else going on here. What could cause a company that has spent significant money and effort getting the word out about Atmos to abandon it in a way that leaves me searching for stronger phrases than “rudely abrupt,” if there are any? Some have suggested that the costs suddenly became too heavy to bear; others contend that agreements with its partners led to an immediate arrangement to stop competing or suffer the consequences. No one from EMC has responded to my queries, and for those who did receive responses, the answers don’t go far beyond the cryptic letter on the website.

An Important Reminder

Hansen-Kohls suggests that this news does not bode well for the long-term perception of clouds, especially among smaller enterprises. She stated, “When Amazon got into cloud, for example, they did it because they had all this excess capacity and they could rent it out for a price without any data center or other major capital investment. When you’ve got vendors like EMC, they might not have that capacity just sitting around to sell, so it could be that their investment was costing more than they were actually making—this is conjecture—it could be that the revenue coming in wasn’t enough to offset the cost.”

In other words, it is critical to evaluate the business model of any cloud vendor before taking the plunge, not just what its existing SLAs seem to promise. If it isn’t clear that the vendor has the resources to begin with, and that those resources are being sustained, then it is not a good idea. Period.

When I asked Hansen-Kohls whether other companies with similar offerings are likely to jump ship and take the customer life rafts with them, she paused for quite some time before responding (although, to be fair, I did catch her off guard). She replied, “Most of the other vendors we’ve spoken with, GoGrid and Joyent for example, have a clear vision of what they want to achieve and have a plan to get there. Joyent will admit to this readily, but they’ve also suffered from an unclear message. They’re going through a rebranding process and are starting to pick up the pace, so I think they’re aware of some of the misconceptions that float around in the cloud and the confusion caused by the marketing terminology, since marketing ran away with the term before the tech was refined. Joyent could have had a similar problem to EMC, but they’re picking up fast enough and gaining ground.”

The main message here, to quote Hansen-Kohls, is that “knowing your risk tolerance when you go into the cloud is critical. If you’re putting data in the cloud you can’t live without, such as in a case like this, you have to know what your risk tolerance is for losing that data for a certain amount of time. If there are compliance restrictions, for instance, they can’t tolerate this at all—this is a real kick in the argument against moving into the cloud. EMC is not making any promises about how long they’ll keep the data there.”

Like many others who read the news, which was so thoughtlessly timed with the closing bell on a pre-holiday Friday in the United States, Hansen-Kohls was shocked by the customer email cited above: “I read it and I was shocked. No SLA, no production—get your data out because there’s no guarantee it will be here. It’s so sudden—there was no forewarning, thus no giving anyone time to transition. The enterprises who move to the cloud need a contingency plan so they can get their data out when something like this happens.”