10 Reasons Why Telcos Will Dominate Enterprise Cloud Computing

By Joe Weinman

November 3, 2008

New-economy icons like Google and Amazon, with Internet-speed innovation in their DNA, have announced a dizzying array of cloud computing services, and InformationWeek quoted Google CEO Eric Schmidt as saying that with the exception of security requirements, “there’s not that much difference between the enterprise cloud and the consumer cloud.” If that’s true, it shouldn’t be too difficult for a Google or Amazon to leverage a strong consumer franchise and early success serving Facebook application start-ups such as Animoto Productions, and rapidly penetrate blue-chip Fortune 500 enterprises.

But old economy stalwarts like telcos have made cloud computing announcements, too. Consider, for example, AT&T’s recently announced Synaptic Hosting service, utilizing its 38 global Internet datacenters.

However, in a battle between a company born in the 19th century versus a nimble new-millennium innovator at the top of its game, is there really any question as to who ultimately will service the enterprise market’s cloud needs?

Well, actually, there is. Because while there may not be much of a difference between enterprise cloud services and consumer cloud services architecturally, there are dramatic differences between them in every other respect. These include not just security, but sales, service, support, scale, solutions, SLAs and so on. In fact, because the enterprise is so different, companies like Google have been trying different approaches to make headway, such as pursuing and extending partnerships with Salesforce.com and IBM. As a predictor of likely success, one need look no further than studies like Fingerprint, which shows that Gmail — even after several years on the market, the $625 million acquisition of Postini and a value price of $50 per user per year (or free, for some institutions and small businesses) — has only a minor share of the enterprise e-mail market.

To understand why telecommunications companies have such a strong franchise in this market space, it will be helpful to define what a cloud service is. I define it as a CLOUD: Common, Location-independent, Online Utility provisioned on-Demand. Common (i.e., shared) in that it multiplexes demand from multiple customers and applications into a common pool of resources. Location-independent, because it shouldn’t matter where you are or where the service is. Online, in the sense that it is accessible over a network, as well as “not down.” A utility because it provides value and offers usage-sensitive pricing. And on-demand in that the ability to provision capacity or service should be as fast as possible to meet variable demand requirements, enhancing business agility and providing capacity at the lowest total cost.
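As a back-of-the-envelope illustration of the "utility" and "on-demand" attributes, usage-sensitive pricing means a bill proportional to what was actually consumed rather than to provisioned peak capacity. The sketch below uses purely hypothetical rates, not any provider's actual price list:

```python
# Hypothetical usage-sensitive (utility) pricing for a cloud service.
# All rates are illustrative only.
RATE_PER_CPU_HOUR = 0.10   # dollars per server-hour
RATE_PER_GB_MONTH = 0.15   # dollars per GB-month of storage

def monthly_bill(cpu_hours: float, gb_months: float) -> float:
    """Pay only for what is used: no fixed capacity commitment."""
    return cpu_hours * RATE_PER_CPU_HOUR + gb_months * RATE_PER_GB_MONTH

# A workload that bursts to 100 servers for 8 hours, then idles,
# plus 50 GB of persistent storage for the month:
print(monthly_bill(cpu_hours=100 * 8, gb_months=50))  # 87.5
```

The point of the model is that a bursty workload pays for 800 server-hours, not for 100 servers held all month.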

Under this definition, not only can computing be cloud-based, but so can storage, security, audio conferencing, video conferencing, Web conferencing, messaging, collaboration, software as a service and so forth. In fact, cloud services have been around since well before today’s latest networked IT architectures and business models. Hotel chains are cloud services: they time- and space-division multiplex guests traveling as individuals and in groups, on vacation or business, into dynamically allocated units of capacity (rooms). They are location-independent, in that no matter what city you are in, you are likely to find a service node (a local hotel from the chain). They are online, accessible over wide-area highways and local-area hallways. They also are utilities (pay per room per night). And they are available on-demand (although reservations are recommended during peak season).

Large, global, integrated service providers (aka “telcos”), leaders in global networking and hosting, have a compelling value proposition to enterprise customers for such services, which inherently are net-sourced IT. Not only can such providers offer networking, hosting and application management services, they also can take advantage of the evolution of cloud services, creating an interoperable, integrated and “platformized” set of capabilities: compute and storage infrastructure; voice, data and video conferencing; and horizontal productivity-, enterprise- and vertical-focused applications.

In fact, such providers have 10 major strategic advantages in this market:

(1) Enterprise sales capability — Telcos have a long history of selling to enterprises as well as consumers. For example, AT&T had annual revenues of $119 billion in 2007 — more than either IBM or HP — and roughly half of those revenues came from businesses. Unlike consumers or start-up customers, enterprise CIOs do not want to go online to initiate and manage a relationship. They want dedicated account teams collaborating closely with them and their teams for the long term, in many cases with a permanent on-site presence. Some might argue that there is a major business model transformation underway. After all, who needs an enterprise sales force when employees can just use their credit card to provision services?

This is unlikely to happen in the enterprise for three reasons. First, most enterprises have tight controls on purchasing that extend to $10 worth of business cards, much less buying online computing and storage capacity. Second, no corporate information security officer is likely to appreciate the idea of tens of thousands of employees purchasing cloud services and placing proprietary corporate data willy-nilly across providers and platforms. Third, enterprise IT shops already have experienced the chaos and hidden costs associated with loss of control of applications, desktop images, and foundation architecture in departmental computing and rich desktop environments, and thus are not likely to support a model of individual purchases of cloud capacity and services. If the enterprise wants to avail itself of the benefits of the cloud, credit card purchasing is not the way to go.

(2) Lifecycle service and support — It’s not just sales, but also after-sales service and support, including: lifecycle management teams ensuring successful service delivery 24/7; advanced tooling for service monitoring and management; portals for network and application performance, usage monitoring and configuration and provisioning changes; and even e-bonding between enterprise systems and service provider systems.

(3) Reliable operations at scale — Rather than offering services that still remain in “Preview Release” or permanent “Beta” purgatory after many years to avoid any implied service reliability or feature stability commitments, service providers go through a comprehensive suite of pre-launch interoperability, certification, and scalability engineering and testing. In fact, telcos are used to engineering services for four or five nines of availability, even as they scale up to tens of millions of customers. This reliability at scale is in telcos’ DNA and service culture, as well as in regulatory requirements. Imagine a trauma victim calling 911 and getting a pre-recorded message saying, “Your call did not go through — but, hey, we’re still in beta.” It isn’t clear that a new economy culture of random innovation is compatible with a culture of continuous delivery of the same service to tens of millions of customers day after day.
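The "nines" in that claim translate directly into permitted downtime: each additional nine cuts the allowable annual outage time by a factor of ten. A quick sketch of the arithmetic:

```python
# Annual downtime implied by "N nines" of availability.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Maximum outage minutes per year at the given availability level."""
    return (1 - availability) * MINUTES_PER_YEAR

for a in (0.999, 0.9999, 0.99999):
    print(f"{a:.5f} availability -> {downtime_minutes_per_year(a):.1f} min/yr")
```

Three nines allows roughly 8.8 hours of outage per year; five nines allows barely five minutes, which is why engineering for it shapes an entire service culture.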

(4) SLAs with financial penalties — Not only won’t enterprises accept “Well, after all, it’s still in beta” as an excuse for service outages, they demand meaningful SLAs (service level agreements) with clear metrics for evaluating achievement of those SLAs, backed up by monitoring and management systems, and financial penalties such as credits or refunds if service levels aren’t met. A “free” or low-cost service with questionable delivery quality is about as attractive to a CIO as an offer of free neurosurgery from someone who just skimmed a blog on how to do it in three easy steps.

(5) Full enterprise solutions portfolio — Cloud computing services don’t exist in a vacuum; many other services may be procured in conjunction with them, either due to technical architecture requirements or due to contracting benefits, such as discounts for total spend. Related services such as network access and transport, MPLS VPNs for backhauling to the enterprise datacenter, application management, global load balancing, asymmetric Web acceleration, network-based firewalls and other network-based security services, content delivery, Voice over IP, Video over IP, managed messaging, Web conferencing and remote access can offer synergies when combined with cloud computing and storage.

(6) Integrated hosting and network services — This has real benefits in terms of cost and performance. It generates cost advantages in a number of ways. First, having hosting facilities on net — that is, in the same locations as core network backbone switching and routing facilities — eliminates expenses associated with building additional access facilities to reach a third-party datacenter. Integrated providers also can access network facilities at cost, rather than at market prices. And larger providers should be able to achieve more compelling economies of scale. Having hosting facilities on net also means better performance by reducing router hops and associated physical propagation delays.
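The propagation-delay claim is easy to quantify: light in fiber travels at roughly two-thirds the speed of light in vacuum, about 200 km per millisecond one way, so physical distance sets a hard floor on round-trip time no matter how efficient the routers are. A minimal sketch:

```python
# Round-trip propagation delay over fiber, ignoring queuing and router hops.
# Light in fiber travels at roughly 2/3 the speed of light in vacuum.
FIBER_KM_PER_MS = 200.0  # approximate one-way propagation speed

def rtt_ms(distance_km: float) -> float:
    """Minimum round-trip time for the given one-way fiber distance."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(rtt_ms(4000))  # transcontinental-scale path: 40.0 ms floor
print(rtt_ms(50))    # nearby on-net datacenter: 0.5 ms floor
```

Hosting on net shortens `distance_km` and removes hops, which is where the latency advantage comes from.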

(7) Vendor independence — Service providers tend to be software and hardware vendor-agnostic. The reason for this is that their broad customer bases have wide ranges of requirements and preferences, and service providers are strategically intent on reaching as wide a market as possible. Consequently, lock-in to a specific storage, server, operating system, hypervisor, middleware, database or application vendor would be self-defeating by limiting market penetration. This contrasts with some of the existing players, who mostly seem to have at least some proprietary elements to their platforms.

(8) Global footprint — It’s not news that today’s enterprises have gone global. Whether it’s a global base of employees, customers, supply chain partners, offshore contact centers or skill base for innovation, reach and footprint are critical. Large, integrated global service providers can deliver services locally and consistently virtually anywhere in the world, supporting today’s increasingly interactive applications with proximate infrastructure that reduces response time. They also have the sales and support resources to directly engage with regional or local leadership, or corporate executives headquartered anywhere from Shanghai to Dubai, Bangalore to Brussels, or Sydney to São Paulo.

(9) Financial stability and market commitment — In today’s tumultuous economic environment, enterprises are more focused than ever on the financial stability, brand and business viability of the service providers that deliver key parts of their infrastructures. Commitment to hosting and cloud computing as part of the provider’s core business is important, as opposed to cloud services being a potentially temporary excursion from a different core business such as online retailing or advertising. Over the last few years, high and rising stock prices have permitted some new economy players substantial flexibility in capital investments, but recent drops of fifty or sixty percent may slow such adventurism for the foreseeable future.

(10) Technologies are easier to replicate than relationships and operations — Don’t the famously highly paid developers at the new economy companies have an edge in creating new technologies such as automated provisioning that enable cloud services to rapidly scale up and down? If they do — which is arguable — it isn’t sustainable. Such technologies have been around for years from companies as small as BladeLogic and as large as IBM (e.g., Tivoli Provisioning Manager), with variations such as VMware’s vCenter and VMotion fitting into the mix. For every highly paid developer at an online bookseller, there is a highly motivated developer at a start-up or large global software firm, developing software tools for others, like integrated service providers, to incorporate into their tooling and management platforms. Even Animoto, the poster child for non-consumer use of cloud computing services, leveraged a third party, RightScale, to manage dynamic allocation of these services. Service providers also can choose best-in-class capabilities and focus on integration. Much harder to replicate are global networks built with literally hundreds of billions of dollars of investment, and the experienced skill base, long-term enterprise customer relationships, management tools, support organizations, service culture, and local access and regulatory relationships that enable services to be delivered successfully at scale.

The different players in the emerging cloud computing market have different starting points, different current strategic advantages and different challenges. The trick to handicapping this race is to focus on fundamentals: what advantages are duplicated easily, and which are sustainable. Ultimately, the winners in selling to the enterprise will have to address enterprise requirements and competitive strategy issues identified above. Large, global, integrated service providers, who are not just telecommunications companies, but also hosting and applications management companies, just might have the edge in selling and delivering cloud services to the demanding enterprise CIO.


Joe Weinman is Strategic Solutions Sales VP at AT&T. The views expressed herein are his own and do not necessarily reflect the views of AT&T.
