10 Companies Spurring Enterprise Cloud Computing (Pt. I)

By Derrick Harris and Dennis Barker

November 18, 2008

There are those who doubt cloud computing will ever permeate large enterprise in any meaningful way. Don’t tell that to this group of vendors and providers, who have been supplying IT to the world’s largest enterprises for decades. If their robust cloud technology portfolios are paired with determined sales and marketing efforts, these companies could play a big part in making “the big switch” a reality.

On Friday, we’ll conclude the list with five Web-era companies whose momentum, customer bases, business models and sheer innovation should let them drive — or at least enable — widespread enterprise adoption of cloud computing and services.

HP

Don’t be fooled by terms like “scalable technology,” “technology as a service,” and “adaptable infrastructure” — HP does build clouds. In keeping with that vocabulary, however, there is no “HP Cloud”; there’s only HP Adaptive Infrastructure as a Service (AIaaS).

As part of AIaaS, which is based on HP’s Flexible Computing Services, HP provides “pre-built application infrastructure … delivered through highly automated processes.” Customers can have access to HP-owned or managed datacenters to run their enterprise applications and be up on additional infrastructure “in a matter of hours,” HP promises. HP takes care of all applications, operations and infrastructure management.

Flexible Computing Services is HP’s technology bucket for application and infrastructure provisioning. It is HP’s utility computing division, and home of the Flexible Computing Club, where you can run a quick pilot to see if utility computing works for you.

HP also offers a set of “datacenter transformations” — tools and technologies from automation to virtualization — to convert old-school infrastructure into a cloud-based system. But HP also can deliver a cloud on the back of a truck. The Performance Optimized Data Center, or HP POD, is a 40-foot shipping container that can hold 22 19-inch 50U racks, or approximately 3,500 compute nodes, plus power modules and other backup hardware.
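
That density claim is worth a quick sanity check. Here is a minimal back-of-the-envelope sketch in Python, using only the figures quoted above (the nodes-per-rack-unit inference is ours, not an HP spec):

    # Rough density check on the POD figures quoted above (assumed, not HP specs).
    racks = 22
    units_per_rack = 50                   # 50U racks
    total_units = racks * units_per_rack  # 1,100 rack units in the container
    nodes = 3500
    print(nodes / total_units)            # ~3.2 nodes per rack unit, which implies
                                          # blade-style multi-node packaging rather
                                          # than one server per rack unit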

The Defense Information Systems Agency is putting HP’s cloud-building expertise to use in its Rapid Access Computing Environment (RACE). With this shared services cloud, Department of Defense users can provision the resources they need, on demand, to develop and test applications, and pay for those resources using a credit card. DISA’s implementation incorporates HP services including Operations Orchestration, Service Manager and ProLiant blades.
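
DISA hasn’t published RACE’s interface here, but the self-service model described above (pick a resource bundle, provision it on demand, pay by credit card) can be sketched in a few lines of Python. Everything below, from the class names to the catalog rates, is hypothetical; it illustrates the workflow, not DISA’s or HP’s actual API:

    # Hypothetical sketch of a RACE-style self-service provisioning request.
    # All names and rates are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class ResourceBundle:
        """A pre-defined dev/test environment a user might request."""
        cpus: int
        memory_gb: int
        storage_gb: int
        hourly_rate_usd: float

    CATALOG = {
        "small-test": ResourceBundle(2, 4, 100, hourly_rate_usd=0.50),
        "large-test": ResourceBundle(8, 32, 500, hourly_rate_usd=2.75),
    }

    def provision(bundle_name: str, hours: int, card_number: str) -> dict:
        """Reserve a bundle on demand and bill the user's credit card."""
        bundle = CATALOG[bundle_name]
        cost = bundle.hourly_rate_usd * hours
        # A real system would call the provider's orchestration layer
        # (RACE incorporates HP Operations Orchestration) and a payment gateway.
        return {"bundle": bundle_name, "hours": hours,
                "charged_usd": round(cost, 2),
                "card": "****" + card_number[-4:]}

    print(provision("small-test", hours=48, card_number="4111111111111111"))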

HP also has joined with Yahoo! and Intel to set up a network of datacenters they’re calling the Cloud Computing Test Bed. “The test bed simulates a real-life, global, Internet-scale environment, which gives researchers an unprecedented ability to test applications and measure the performance of infrastructures and services built to run on large-scale cloud systems,” Russ Daniels, chief technology officer of HP Cloud Services, told GRIDtoday when the program was launched in August.

“We have cloud domain expertise and dedicated HP resources who have been servicing these customers for years and understand the requirements of scale-out,” Ed Turkel, manager of HPC product marketing for the Scalable Computing & Infrastructure division, told this publication earlier this year. “HP is servicing scale-out and cloud-like computing environments for customers with tens of thousands of servers and tens of petabytes of storage installed around the world.”

Sun Microsystems

People have been asking aloud where Sun Microsystems is in all the talk around cloud computing. According to Sun’s Russ Castronovo (and as Sun itself made clear last week), the interest “has been noticed.”

What this ultimately means we don’t yet know, but the company that believes “The Network is the Computer” clearly has cloud aspirations. As reported last week, Sun has mothballed its seminal Network.com offering and is retooling it for the cloud set. Some might question who the targeted user of the coming service will be, but conventional wisdom points to the enterprise. After all, while consumers might unknowingly take advantage of Sun’s hard work running Java apps on their phones, enterprise customers truly appreciate the breadth of networked-computing expertise at work within Sun’s walls. Research projects like Caroline and Hydrazine might offer some insight into the finished product, too.

About that breadth of knowledge: Sun’s portfolio includes cloud-standard database MySQL (“The database of choice these days is invariably MySQL,” says Castronovo); Solaris OS and its collection of features; Sun’s leading-edge servers and processors; and, perhaps most importantly for the sake of this discussion, virtualization capabilities with its xVM line. “If you’re looking at the technology pieces, Sun’s got the lion’s share of them,” touts Castronovo. “We haven’t said specifically what are we going to do in terms of the particulars … but most people, when they’re out there looking at and watching Sun, go ‘Here’s the company with the preponderance of the technology people are actually using right now.’”

Pared down to its core, the cloud computing ecosystem consists of vendors helping companies build clouds, cloud platform providers, and managed service providers trying to work a cloud angle into their offerings. Castronovo thinks Sun can both build clouds and provide a platform, but he won’t come out and admit that is the plan. His statement on the matter: “You can make an extraordinarily compelling case for things.”

Sun’s recently announced workforce reduction might seem a bad omen for the company’s future, but it also is analogous to the value proposition of cloud computing: saving money, streamlining operations and focusing on core strengths. While skilled employees no doubt lost jobs in other areas, Sun has added resources and importance to its cloud initiative. Dave Douglas, president of the new Cloud Computing & Developer Platforms group, answers directly to CEO Jonathan Schwartz, and vice president and CTO Lew Tucker returns to Santa Clara after spearheading Salesforce.com’s AppExchange service.

2009 promises to be “extraordinarily interesting in terms of market dynamics,” believes Castronovo, as “[y]ou’re seeing lots and lots of customers — enterprise-oriented customers — who are seeing levels of efficiency that people are achieving in the cloud that they can’t touch. They want to know how they can do this.” It looks like Sun might show them the light.

AT&T

If one accepts that Internet performance and reliability are critical to any cloud computing deployment, one might assume a networking leader would be a formidable cloud services provider. If that networking leader happens to have a thriving enterprise hosting business as well, all the better.

Enter AT&T.

With 547,000 route miles of fiber (some of which currently support 40 Gbps) and 38-and-growing Internet datacenters dedicated to enterprise-class customers (including gaming giants like Konami, Microsoft and Sony Online Entertainment), AT&T definitely has both the networking and computing know-how and investment (“tens and tens of billions”) to thrive in a cloud-centric world. Aside from straight infrastructure resources, AT&T has spent hundreds of millions of dollars, according to vice president of solutions sales Joe Weinman, developing “world-class” monitoring and management tools for constant uptime and resiliency, as well as for intrusion protection and other security capabilities.

Acknowledging that telcos as a class have certain inherent advantages over other cloud providers, Weinman believes AT&T stands alone as leader. His rationale is that nobody else excels across the board in terms of networking, computing, selling to enterprises, management/monitoring, mobile-end-point support, etc. “With the convergence of IT and networking, our play there as the world’s largest telecommunications provider and also as someone who is widely considered a leader, if not the leader, in global hosting for enterprise customers certainly provides a firm foundation on which to build these geographically dispersed, network-integrated, network-accessed hosted services, be they computing, storage or applications,” he says.

So, when the company officially entered the cloud fray in August with its Synaptic Hosting service, Weinman described it as more an “evolution of a strategy to increasingly virtualize and utilize our capabilities” than a dramatic change. “This is not new to AT&T,” he explains. “AT&T has had a steady investment in what we now call cloud computing, but we’ve been in utility computing [and] software as a service — all of the enterprise-oriented capabilities that are very relevant to that space in terms of the infrastructure operations, scalability, global support, multi-language client support and enterprise sales forces, etc.”

Weinman points to enterprise-class contracts and SLAs as further sweetening AT&T’s value proposition to enterprise customers. Partnerships with leading enterprise software vendors don’t hurt, either. Few, if any, cloud providers can claim “preferred partner” status with SAP, but AT&T can. While other cloud platforms cater to Web applications, hosted SAP instances will run optimally in an AT&T datacenter. If you consider managed telepresence, contact center services and voice networking to be cloud services, as Weinman does, AT&T’s cloud prowess appears even greater.

Ultimately, Weinman believes, enterprises crave a predictable, reliable, robust, familiar experience, and the company that can provide these things is the company that will bring enterprise customers to the cloud. Real CIOs are not going to bet their careers and Sarbanes-Oxley requirements on something that sounds good in a press release, he says. To date, he adds, AT&T’s cloud services and Synaptic Hosting offerings mirror the company’s overall success in terms of attracting blue-chip customers across the industry spectrum.

IBM

It actually would be weird if IBM were not one of the companies leading enterprise users to cloud computing. The company has been building giant-scale, high-speed computing systems since what seems like ancient times, it has all the technologies needed to provide capabilities and infrastructure on-demand, and enterprise IT organizations are its long-time customer base.

IBM has tinkered with cloud computing for the past few years via internal and pilot projects, but last November made its cloud intentions clear with the announcement of Blue Cloud (which, if IBM were a Bond villain, would be a plan to dominate the globe with some kind of crazy “virtualization” scheme). The Blue Cloud initiative has spawned 13 cloud computing centers and 40 “innovation centers” around the world — including in Shanghai, Dublin, Brazil, India, South Korea, South Africa, The Netherlands and the United States — where clients can use IBM hardware, software and services, and tap the expertise of 200 dedicated scientists, to develop, test and experiment with cloud applications.

Blue Cloud encompasses IBM’s technologies and products that are essential to building a cloud: servers, storage, networking, virtualization, automation, service management tools, provisioning services, resiliency functions, on-demand capabilities and usage tracking, and security. This long roster includes: System p, x and z servers; Capacity on Demand, a set of technologies that enable users to readily increase compute and memory as needed and pay only for what’s used; and its Tivoli software, which brings request-driven provisioning, dynamic workload management and accounting capabilities.
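
The pay-only-for-what’s-used mechanics behind Capacity on Demand are simple to illustrate. The sketch below is a hypothetical metering calculation; the rates and record format are invented, and IBM’s actual accounting logic is not documented here:

    # Hypothetical pay-per-use metering in the spirit of Capacity on Demand.
    # Rates and record formats are assumed for illustration only.
    CPU_HOUR_RATE = 0.12     # assumed $/CPU-hour
    MEM_GB_HOUR_RATE = 0.03  # assumed $/GB-hour

    def monthly_charge(usage_records):
        """Sum metered usage into a pay-for-what-you-use charge.

        Each record is (cpu_hours, mem_gb_hours) for one metering interval.
        """
        total = 0.0
        for cpu_hours, mem_gb_hours in usage_records:
            total += cpu_hours * CPU_HOUR_RATE + mem_gb_hours * MEM_GB_HOUR_RATE
        return round(total, 2)

    # A workload that bursts from 4 to 64 CPUs pays for the burst only
    # while it runs:
    quiet = [(4 * 24, 16 * 24)] * 27   # 27 quiet days: 4 CPUs, 16 GB in use
    burst = [(64 * 24, 256 * 24)] * 3  # 3 busy days: 64 CPUs, 256 GB in use
    print(monthly_charge(quiet + burst))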

“Blue Cloud is our effort to bring cloud computing to businesses and corporate datacenters,” says Dennis Quan, director of development in IBM’s Autonomic Computing division and the man who launched the IBM/Google cloud computing partnership in 2007. “The biggest thing is to help our clients understand the technologies and the business value proposition for adopting cloud computing, including virtualization, green IT, service management, better use of services and paying only for the services you use.”

“Many of our customers want the benefits of cloud computing but want those benefits within their own datacenters, so we have focused on the idea of the enterprise cloud, or the private cloud,” Quan says. “We expect that in the near future, our customers will take advantage of services on public and private clouds, but in an enterprise or private cloud, they will have very fine-grain control over policies.”

IBM says it has a four-part, all-encompassing cloud mission: selling its own cloud services portfolio; helping ISVs design and deliver cloud services; helping clients integrate cloud services into their business; and providing cloud environments to businesses.

“It’s all about scalable services over the Internet,” Quan says. “Being able to dynamically allocate resources to run different workloads is a big motivation for enterprises and smaller businesses, also, to move toward a cloud model.”

Microsoft

Microsoft’s Azure cloud is still floating a bit in the distance, but it’s clear the company’s goal is to move enterprise applications from local premises to Web-based systems. The Azure Services Platform (details were just released in October) provides all the necessary ingredients: the underlying OS (Azure); the capabilities to build cloud apps using familiar Windows or open-source tools; and the big datacenters to run it all. The maestro managing the shuffling of resources, load balancing, deployments, upgrades and service lifecycle — all based on developer-defined requirements — is a new piece of technology called the Windows Azure Fabric Controller.
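
Microsoft declares that service model in XML configuration files, but the underlying idea (the developer states requirements; the controller continually reconciles running instances against them) can be recast as a short, purely illustrative Python sketch. The field names and reconcile logic here are our invention, not Azure’s API:

    # Illustrative model of a fabric-controller-style reconciliation loop.
    # The real Azure service model is declared in XML; these names are invented.
    service_definition = {
        "name": "order-web",
        "instances": 3,       # developer-defined requirement
        "vm_size": "medium",  # hypothetical size label
        "endpoint": {"protocol": "http", "port": 80},
    }

    def reconcile(definition, running):
        """Return the actions a controller would take to meet the definition."""
        want, have = definition["instances"], len(running)
        if have < want:   # scale up to the declared instance count
            return ["start %s-%d" % (definition["name"], i)
                    for i in range(have, want)]
        if have > want:   # scale back down
            return ["stop " + name for name in running[want:]]
        return []         # requirement already satisfied

    print(reconcile(service_definition, ["order-web-0"]))
    # -> ['start order-web-1', 'start order-web-2']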

“The Azure Services Platform is for anyone who wants to build highly scalable and available cloud-based solutions,” says Steven Martin, senior director of marketing in Microsoft’s Connected Systems Division. The platform serves as “an on-ramp to the cloud by providing the tools and building blocks to combine existing on-premises systems with the cloud. Enterprise IT now has new options in writing Internet-connected applications, [and] adding new functionality to existing software.”

Microsoft expects the Azure Services Platform to appeal to large enterprises because it will allow them “to extend existing on-premises applications to the cloud to increase scalability, reliability and interoperability, while reducing costs and management overhead,” Martin says. “Azure enables developers to scale applications seamlessly, as demand rises and falls.”

Other components under the Azure umbrella include Microsoft SQL Services for distributed database services and reporting; .NET Services, which Microsoft says simplifies building cloud-based applications and implementing things like workflow and access control; Live Services for storing and sharing files and building “rich social applications”; and Microsoft SharePoint Services and Microsoft Dynamics CRM Services for business collaboration. Windows Azure Storage Service “is designed to be the lowest-cost, most efficient solution for large-scale data storage and retrieval in the cloud,” the company says.

What makes its cloud offering unique, Martin says, is that Microsoft’s “software-plus-services approach gives customers the flexibility to utilize their existing on-premise software in combination with new Web-based applications. They can take advantage of one, all, or a combination of technologies that make up the Azure Services Platform. … Additionally, the platform supports the REST, SOAP and XML protocols, and is built to work with any customer applications.” Azure eventually will support Ruby on Rails, Python and other outside tools.
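
That protocol support means Azure services should be reachable from any stack that can speak HTTP, not just .NET. As a hedged illustration only, with the host, path and auth header as placeholders rather than the actual Windows Azure Storage API, a REST-style upload looks roughly like this:

    # Illustrative REST-style PUT against a cloud storage service. The
    # endpoint and auth scheme are placeholders; a real service defines
    # its own URLs and request signing.
    import urllib.request

    def put_blob(host, container, blob_name, data, auth_token):
        url = "https://%s/%s/%s" % (host, container, blob_name)
        req = urllib.request.Request(url, data=data, method="PUT")
        req.add_header("Authorization", auth_token)  # placeholder auth
        req.add_header("Content-Type", "application/octet-stream")
        with urllib.request.urlopen(req) as resp:
            return resp.status

    # Example call (requires a real endpoint and credentials):
    # put_blob("account.example.net", "reports", "q4.csv",
    #          b"col1,col2\n1,2\n", "Bearer <token>")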

Microsoft won’t say what pricing will look like, only that it will be, not surprisingly, based on resource consumption. The company expects the cloud will be ready in the latter half of 2009. A “community technology preview” will give developers early access to the platform SDK. Whenever it goes live, though, Microsoft’s built-in customer base and millions of devotees worldwide should help bring the cloud and the enterprise closer together.
