A Crisis of Definitions?

By Nicole Hemsoth

April 18, 2010

We have heard all about “cloud technology” in countless articles, but when we get right down to it, what we call “cloud technology” is actually a collection of technologies, driven by methodologies and styles of computing, adapted to suit the mission-critical demands of enterprise HPC.

There is little consensus about the exact nature of cloud computing in enterprise HPC — at least in terms of what so many in the community are calling it. Some suggest it is a technology in its own right, while others state that cloud is merely a style of computing. Others, including Addison Snell of Intersect360 Research, take the concept of “style” or “forms” of computing a bit further and call cloud a “methodology” of computing. While on the surface there may seem to be little difference between these terms, with growing adoption it is important that there is some consistency or consensus. To arrive at a sense of cloud as a technology, a methodology of computing, or a “new” style of computing, the question was posed to a handful of members of the enterprise and HPC community.

Cloud as a Methodology of Computing

Wolfgang Gentzsch, Advisor to the EU project Distributed European Infrastructure for Supercomputing Applications (DEISA) and member of the Board of Directors of the Open Grid Forum, suggests that cloud computing is not a distinct technology, but rather a combination of technologies that have evolved over the course of decades into something far more akin to a methodology than to a style. Gentzsch states:

Cloud computing is many things to many people. If, however, you look closer at its evolution, from terminal-mainframe, to client-server, to client-grid, and finally to client-cloud (perhaps to terminal-cloud, or PDA-cloud, next), it is the logical result of a 20-year effort to make computing more efficient and more user friendly — from self-plumbing to utility.

In my opinion, cloud comes closest to being a methodology, i.e., “a set of methods, practices, procedures, and rules” defined and applied by the IT community to provide user-friendly access to efficient computing. At a high level, these are: computing as a utility; pay-per-use billing; access over the internet — anytime, anywhere; scaling resources; Opex instead of Capex; etc.

To a lesser extent, this has to do with a specific technology you would call cloud technology; the technological bits and pieces you need to build and use a cloud were developed before the term cloud computing was invented, and are thus independent of cloud. In fact, already in the ’90s, the idea of the ASP (application service provider) was pure SaaS, and almost all the ingredients were already there: the Internet, secure portals, server farms, ASP-enabled applications, and software companies willing to implement. But all these components were still inefficient: server farms didn’t scale, bandwidth was low, portals were clumsy, and most importantly, users weren’t mentally ready for ASP.

Today, all the technology is on the table to build a most efficient, scalable, flexible, dynamic cloud. Still, the most severe roadblocks to cloud adoption today (the same ones ASPs and grids faced) come from mental barriers and considerations like privacy, competitiveness, and intellectual property issues. (See a more complete listing of roadblocks in my most recent blog.)

So, in my opinion, cloud computing is a methodology for utility computing, enabled by different modern technologies, supporting a new style of computing, i.e., computing via the Internet.

Echoing this view of cloud as a methodology of computing rather than a unique set of technologies (albeit from a different angle), Bruce Maches, former director of information technology for Pfizer’s R&D division and current CIO of BRMaches & Associates, stated:

There are arguments that can be made on both sides (yes or no) for all three of the possibilities. I would argue no, cloud is not a technology in and of itself. Cloud computing is the natural evolution of the use of the infrastructure built around, and supporting, the internet and the services it provides. There is no single technology you can point to and say ‘that is cloud computing.’ Certainly there are many computing advances that enable the leveraging of hardware and software resources over the internet and allow companies to avoid building out their own expensive infrastructure. To try to lump them into one technology called cloud just doesn’t quite work.

Is cloud a style of computing? This is a harder one to define, as a style can be a manner or a technique. It would be difficult to come up with definitive arguments to say either yes or no. Is it a methodology? Is it a discipline governing how computing resources, regardless of source, are appropriately and efficiently applied to solve problems? Are there underlying governance principles that can be used to determine if cloud computing is the right answer to meet a particular need?

I would make the argument that the application of cloud computing is the overall gestalt: using appropriate methodologies to determine when to apply the ‘style’ of cloud computing, all of which is supported by the underlying computing and networking technologies.

Enterprise and HPC Cloud as a (Not So) New Style of Computing

Weisong Shi is an Associate Professor of Computer Science at Wayne State University, where he directs the Mobile and Internet Systems Laboratory (MIST) and pursues research interests in computer systems and mobile and cloud computing. Shi, who co-authored this article, suggests that from the perspective of end users, cloud is a “new” style of computing, stating:

To discuss this, we need to take a look at the history of computing. I think there have been three phases of computing in the last 60 years. In the first phase (1960-1980), also known as the mainframe era, the common setting was a mainframe with tens of dumb terminals. If a user wanted to use a computer, he or she would have to go to a computer room and submit the job. The advantage of this style is that end users didn’t need to maintain the computer, e.g., installing software, upgrading drivers, and so on, but at the cost of flexibility. In the second phase (1980-2005), also known as the PC era, each user had his or her own computer — this is what PC stands for, personal computer. The biggest advantage of this computing style is the flexibility it brought to us. End users can do computing wherever they want, and don’t have to go to a dedicated computer room. We have witnessed the success of this model since the inception of personal computers. However, as computers penetrate daily life ever more deeply, we envision that the computer is more and more like an appliance in our home, and end users want to treat a computer the same way they treat a TV or refrigerator. Apparently, the PC model does not work, since it requires end users to install and maintain the computers by themselves; moreover, the PC era is not well suited to content sharing among multiple users, since the network is not treated as a first-class entity in this phase.

The fast growth of Internet services, e.g., Google Docs, YouTube, etc., together with the wide deployment of 3G/4G technologies, stimulates another wave of revolution in the way we use computers, i.e., cloud computing. I think we are entering the cloud computing era, where end users will enjoy the flexibility brought by mobile Internet devices (MIDs) and the ease of managing and sharing their content, i.e., email, documents, photos, videos, and so on, brought by cloud computing. With cloud computing, we will realize the vision of “Computing for the Masses” in the near future.

From the technology point of view, I don’t think cloud computing introduces many new challenges or new ideas. What we need to do in these systems is use existing techniques more efficiently. For example, the Dynamo system, designed by Amazon, uses the most common techniques from the distributed systems textbook, such as optimistic replication, quorum systems, and so on. In the Google File System (GFS), we don’t see too many new ideas, either. The challenge they are facing is how to get it to work in a large-scale setting, and how to use resources in a more efficient way. In summary, I think cloud computing is more of a “new” style of computing than a new technology or methodology.
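Shi’s observation about Dynamo reusing textbook techniques is easy to make concrete. The short Python sketch below is purely illustrative (the class name, parameters, and values are assumptions for this article, not Amazon’s implementation): it shows the quorum arithmetic a Dynamo-style store relies on, where each key is held on N replicas, a read consults R of them, and a write must be acknowledged by W, with overlap between readers and writers guaranteed whenever R + W > N.

    # Illustrative sketch of the quorum technique Shi cites from Amazon's Dynamo.
    # Names and parameter values are assumptions, not Amazon's implementation.
    from dataclasses import dataclass

    @dataclass
    class QuorumConfig:
        n: int  # replicas that hold each key
        r: int  # replicas that must answer a read
        w: int  # replicas that must acknowledge a write

        def read_sees_latest_write(self) -> bool:
            # With R + W > N, any read quorum intersects any write quorum,
            # so at least one contacted replica holds the newest version.
            return self.r + self.w > self.n

    # A configuration discussed in the Dynamo paper: N=3, R=2, W=2.
    print(QuorumConfig(n=3, r=2, w=2).read_sees_latest_write())  # True
    # Availability-leaning settings give up that overlap and instead rely on
    # optimistic replication with conflict resolution after the fact.
    print(QuorumConfig(n=3, r=1, w=1).read_sees_latest_write())  # False

None of this machinery is cloud-specific, which is precisely Shi’s point: the building blocks predate the term.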

When Definitions Become Stifling

Jose R. Rodriguez, CIO of San Jose, Calif.-based Metzli Information Technology, a consulting and implementation firm aligned with IBM Dynamic Infrastructure initiatives, suggests that cloud is a style, methodology, and blend of technologies at once, stating:

If we accept Irving Wladawsky-Berger’s insight of cloud computing as the evolution of Internet-based computing, it is clear that not a single but multiple technologies are at work facilitating network access to a pool of configurable computing resources (NIST). That hardware-decoupled, virtualized, shared resource pool is highly available, provisioned and released on demand (NIST), with a high degree of provider automation so as to minimize management overhead. Revision 15 of the NIST definition lists not one but three styles or models of delivering services via cloud computing. In the first, software as a service (SaaS), provider applications are accessible from end users’ heterogeneous computing devices; the second, platform as a service (PaaS), provides a homogeneous environment suitable for end-user-deployed and -managed applications; and the third, infrastructure as a service (IaaS), is suitable for arbitrary end-user deployment and control of applications, platform, storage, and processing.

It should be noted that in the styles or service delivery models mentioned above, the complexity of the underlying cloud infrastructure is hidden from the end user. Hence, cloud computing is rather a methodology delineating an evolving computing paradigm characterized by high availability and broad network access, elasticity, pooling of resources, and a mechanism to measure the usage of those resources (NIST). Accordingly, although cloud computing may be logically categorized into private, public, community, and hybrid deployment models, Irving Wladawsky-Berger might describe the evolving paradigm as analogous to the industrialization of the delivery mechanism for cloud services: the datacenter.
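One way to read the three NIST delivery models Rodriguez lists is as a split of management responsibility between provider and consumer. The sketch below is a rough, simplified illustration (the layer names and assignments are this article’s assumptions, not the NIST wording): the provider manages everything under SaaS, the consumer manages only deployed applications under PaaS, and the consumer additionally controls the operating system and storage under IaaS.

    # Simplified illustration of who manages which layer under the NIST
    # service models. Layer names and assignments are assumptions for clarity.
    LAYERS = ["application", "runtime/middleware", "operating system",
              "storage", "network/hardware"]

    CONSUMER_MANAGED = {
        "SaaS": set(),                                  # provider runs the application itself
        "PaaS": {"application"},                        # consumer deploys and manages apps only
        "IaaS": {"application", "runtime/middleware",   # consumer also controls OS and storage
                 "operating system", "storage"},
    }

    for model in ("SaaS", "PaaS", "IaaS"):
        for layer in LAYERS:
            owner = "consumer" if layer in CONSUMER_MANAGED[model] else "provider"
            print(f"{model:4} | {layer:18} | managed by {owner}")

In every model, the layers the consumer does not manage are exactly the “hidden complexity” Rodriguez describes.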

As John Hurley, principal investigator and director of the National Nuclear Security Administration and DOE-sponsored Center for Disaster Recovery, notes in his discussion on the topic, “The advancements that have revealed themselves in hardware, software and networking now enable us to solve much different kinds of problems. Of even greater importance is the fact that the potential for solutions to very real, practical and large-scale problems has not only enabled us, but actually requires us to really start from the beginning, in terms of how we define the problems and the resources to address them.”

In short, the definitions we need to be most concerned with are those that move end users forward and keep innovation thriving. It can be misleading to put forth “mixed” information about what cloud is, for instance by consistently calling it a “technology” as if it were rooted in a single innovation. With greater consensus, the writing on the topic can clarify cloud for end users by adhering to one definition: that cloud is a blend of technologies that enables new styles and methodologies of computing for enterprise and HPC users.
