10 Reasons Why Telcos Will Dominate Enterprise Cloud Computing

By Joe Weinman

November 3, 2008

New-economy icons like Google and Amazon, with Internet-speed innovation in their DNA, have announced a dizzying array of cloud computing services, and InformationWeek quoted Google CEO Eric Schmidt as saying that with the exception of security requirements, “there’s not that much difference between the enterprise cloud and the consumer cloud.” If that’s true, it shouldn’t be too difficult for a Google or Amazon to leverage a strong consumer franchise and initial success servicing, say, Facebook application start-ups such as Animoto Productions, and rapidly penetrate blue chip Fortune 500 enterprises.

But old economy stalwarts like telcos have made cloud computing announcements, too. Consider, for example, AT&T’s recently announced Synaptic Hosting service, utilizing its 38 global Internet datacenters.

However, in a battle between a company born in the 19th century versus a nimble new-millennium innovator at the top of its game, is there really any question as to who ultimately will service the enterprise market’s cloud needs?

Well, actually, there is. Because while there may not be much of a difference between enterprise cloud services and consumer cloud services architecturally, there are dramatic differences between them in all other respects. These include not just security, but sales, service, support, scale, solutions, SLAs and so on. In fact, because the enterprise is so different, companies like Google have been trying different approaches to try to make headway, such as pursuing and extending partnerships with Salesforce.com and IBM. As a predictor of likely success, one need look no further than studies like Fingerprint, which shows that Gmail — even after several years on the market, the $625 million acquisition of Postini and a value price of $50 per user per year (or free, for some institutions and small businesses) — has only a minor share of the enterprise e-mail market.

To understand why telecommunications companies have such a strong franchise in this market space, it will be helpful to define what a cloud service is. I define it as a CLOUD: Common, Location-independent, Online Utility provisioned on-Demand. Common (i.e., shared) in that it multiplexes demand from multiple customers and applications into a common pool of resources. Location-independent, because it shouldn’t matter where you are or where the service is. Online, in the sense that it is accessible over a network, as well as “not down.” A utility because it provides value and offers usage-sensitive pricing. And on-demand in that the ability to provision capacity or service should be as fast as possible to meet variable demand requirements, enhancing business agility and providing capacity at the lowest total cost.
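The "utility" element of this definition, usage-sensitive pricing, can be sketched in a few lines. The rates below are hypothetical, chosen for illustration only, and do not reflect any provider's actual pricing:

```python
# Hypothetical rates, for illustration only -- not any provider's actual pricing.
HOURLY_RATE = 0.10    # dollars per server-hour under usage-sensitive (utility) pricing
FIXED_MONTHLY = 60.0  # dollars per month for equivalent owned, dedicated capacity

def utility_cost(hours_used: float) -> float:
    """Pay only for the capacity actually consumed."""
    return hours_used * HOURLY_RATE

# A lightly used server (100 hours/month) is cheaper as a utility,
# while a fully utilized one (720 hours/month) can cost more than owning.
light_usage_bill = utility_cost(100)   # 10 dollars, well under the fixed cost
heavy_usage_bill = utility_cost(720)   # 72 dollars, above the fixed cost
```

The crossover between the two bills is the essence of the utility model: variable or intermittent demand favors paying per use, while steady, saturated demand favors dedicated capacity.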

Under this definition, not only can computing be cloud-based, but so can storage, security, audio conferencing, video conferencing, Web conferencing, messaging, collaboration, software as a service and so forth. In fact, cloud services have been around since well before today’s latest networked IT architectures and business models. Hotel chains are cloud services: they time- and space-division multiplex guests traveling as individuals and in groups, on vacation or business, into dynamically allocated units of capacity (rooms). They are location-independent, in that no matter what city you are in, you are likely to find a service node (a local hotel from the chain). They are online, accessible over wide-area highways and local-area hallways. They also are utilities (pay per room per night). And they are available on-demand (although reservations are recommended during peak season).

Large, global, integrated service providers (aka “telcos”), leaders in global networking and hosting, have a compelling value proposition to enterprise customers for such services, which inherently are net-sourced IT. Not only can such providers offer networking, hosting and application management services, they also can take advantage of the evolution of cloud services, creating an interoperable, integrated and “platformized” set of capabilities: compute and storage infrastructure; voice, data and video conferencing; and horizontal productivity-, enterprise- and vertical-focused applications.

In fact, such providers have 10 major strategic advantages in this market:

(1) Enterprise sales capability — Telcos have a long history of selling to enterprises as well as consumers. For example, AT&T had annual revenues of $119 billion in 2007 — more than either IBM or HP — and roughly half of those revenues came from businesses. Unlike consumers or start-ups, enterprise CIOs do not want to go online to initiate and manage a relationship. They want dedicated account teams collaborating closely with them and their teams for the long term, in many cases with a permanent on-site presence. Some might argue that there is a major business model transformation underway. After all, who needs an enterprise sales force when employees can just use their credit card to provision services?

This is unlikely to happen in the enterprise for three reasons. First, most enterprises have tight controls on purchasing that extend to $10 worth of business cards, let alone the online purchase of computing and storage capacity. Second, no corporate information security officer is likely to appreciate the idea of tens of thousands of employees purchasing cloud services and placing proprietary corporate data willy-nilly across providers and platforms. Third, enterprise IT shops already have experienced the chaos and hidden costs associated with loss of control of applications, desktop images and foundation architecture in departmental computing and rich desktop environments, and thus are not likely to support a model of individual purchases of cloud capacity and services. If the enterprise wants to avail itself of the benefits of the cloud, credit card purchasing is not the way to go.

(2) Lifecycle service and support — It’s not just sales, but also after-sales service and support, including: lifecycle management teams ensuring successful service delivery 24/7; advanced tooling for service monitoring and management; portals for network and application performance, usage monitoring and configuration and provisioning changes; and even e-bonding between enterprise systems and service provider systems.

(3) Reliable operations at scale — Rather than offering services that still remain in “Preview Release” or permanent “Beta” purgatory after many years to avoid any implied service reliability or feature stability commitments, service providers go through a comprehensive suite of pre-launch interoperability, certification, and scalability engineering and testing. In fact, telcos are used to engineering services for four or five nines of availability, even as they scale up to tens of millions of customers. This reliability at scale is in telcos’ DNA and service culture, as well as in regulatory requirements. Imagine a trauma victim calling 911 and getting a pre-recorded message saying, “Your call did not go through — but, hey, we’re still in beta.” It isn’t clear that a new economy culture of random innovation is compatible with a culture of continuous delivery of the same service to tens of millions of customers day after day.
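To put "four or five nines" in concrete terms, a quick back-of-the-envelope calculation (illustrative only) shows how little downtime those availability figures actually permit in a year:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes in an average year

def downtime_budget_minutes(nines: int) -> float:
    """Maximum annual downtime permitted at N nines of availability."""
    unavailability = 10 ** (-nines)  # e.g., 4 nines -> 0.0001 unavailable
    return unavailability * MINUTES_PER_YEAR

# Four nines (99.99%) allows roughly 53 minutes of downtime per year;
# five nines (99.999%) allows only about 5 minutes.
four_nines_budget = downtime_budget_minutes(4)  # ~52.6 minutes/year
five_nines_budget = downtime_budget_minutes(5)  # ~5.3 minutes/year
```

Against a budget of five minutes per year, a multi-hour outage, or a perpetual beta disclaimer, is simply not an option.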

(4) SLAs with financial penalties — Not only won’t enterprises accept “Well, after all, it’s still in beta” as an excuse for service outages, they demand meaningful SLAs (service level agreements) with clear metrics for evaluating achievement of those SLAs, backed up by monitoring and management systems, and financial penalties such as credits or refunds if service levels aren’t met. A “free” or low-cost service with questionable delivery quality is about as attractive to a CIO as an offer of free neurosurgery from someone who just skimmed a blog on how to do it in three easy steps.

(5) Full enterprise solutions portfolio — Cloud computing services don’t exist in a vacuum; many other services may be procured in conjunction with them, either due to technical architecture requirements or due to contracting benefits, such as discounts for total spend. Related services such as network access and transport, MPLS VPNs for backhauling to the enterprise datacenter, application management, global load balancing, asymmetric Web acceleration, network-based firewalls and other network-based security services, content delivery, Voice over IP, Video over IP, managed messaging, Web conferencing and remote access can offer synergies when combined with cloud computing and storage.

(6) Integrated hosting and network services — Integration delivers real benefits in both cost and performance. On the cost side, the advantages arise in several ways. First, having hosting facilities on net — that is, in the same locations as core network backbone switching and routing facilities — eliminates expenses associated with building additional access facilities to reach a third-party datacenter. Integrated providers also can access network facilities at cost, rather than at market prices. And larger providers should be able to achieve more compelling economies of scale. On the performance side, having hosting facilities on net reduces router hops and the associated physical propagation delays.

(7) Vendor independence — Service providers tend to be software and hardware vendor-agnostic. The reason for this is that their broad customer bases have wide ranges of requirements and preferences, and service providers are strategically intent on reaching as wide a market as possible. Consequently, lock-in to a specific storage, server, operating system, hypervisor, middleware, database or application vendor would be self-defeating by limiting market penetration. This contrasts with some of the existing players, who mostly seem to have at least some proprietary elements to their platforms.

(8) Global footprint — It’s not news that today’s enterprises have gone global. Whether it’s a global base of employees, customers, supply chain partners, offshore contact centers or a skill base for innovation, reach and footprint are critical. Large, integrated global service providers can deliver services locally and consistently virtually anywhere in the world, supporting today’s increasingly interactive applications with proximate infrastructure that reduces response time. They also have the sales and support resources to engage directly with regional or local leadership, or with corporate executives headquartered anywhere from Shanghai to Dubai, Bangalore to Brussels, or Sydney to São Paulo.

(9) Financial stability and market commitment — In today’s tumultuous economic environment, enterprises are more focused than ever on the financial stability, brand and business viability of service providers providing key parts of their infrastructures. Commitment to hosting and cloud computing as part of their provider’s core business is important, as opposed to cloud services being a potentially temporary excursion from different core businesses such as online retailing or advertising. Over the last few years, high and rising stock prices have permitted some new economy players substantial flexibility in capital investments, but recent drops of fifty or sixty percent may slow such adventurism for the foreseeable future.

(10) Technologies are easier to replicate than relationships and operations — Don’t the famously highly paid developers at the new economy companies have an edge in creating new technologies such as automated provisioning that enable cloud services to rapidly scale up and down? If they do — which is arguable — it isn’t sustainable. Such technologies have been around for years from companies as small as BladeLogic and as large as IBM (e.g., Tivoli Provisioning Manager), with variations such as VMware’s vCenter and VMotion fitting into the mix. For every highly paid developer at an online bookseller, there is a highly motivated developer at a start-up or large global software firm, developing software tools for others, like integrated service providers, to incorporate into their tooling and management platforms. Even Animoto, the poster child for non-consumer use of cloud computing services, leveraged a third party, RightScale, to manage dynamic allocation of these services. Service providers also can choose best-in-class capabilities and focus on integration. Much harder to replicate are global networks that have been built for literally hundreds of billions of dollars of investment, and the experienced skill base, long-term enterprise customer relationships, management tools, support organizations, service culture, and local access and regulatory relationships that enable services to be delivered successfully at scale.

The different players in the emerging cloud computing market have different starting points, different current strategic advantages and different challenges. The trick to handicapping this race is to focus on fundamentals: which advantages are easily duplicated, and which are sustainable. Ultimately, the winners in selling to the enterprise will have to address the enterprise requirements and competitive strategy issues identified above. Large, global, integrated service providers, which are not just telecommunications companies but also hosting and applications management companies, just might have the edge in selling and delivering cloud services to the demanding enterprise CIO.


Joe Weinman is Strategic Solutions Sales VP at AT&T. The views expressed herein are his own and do not necessarily reflect the views of AT&T.
