A Crisis of Definitions?

By Nicole Hemsoth

April 18, 2010

We have heard all about “cloud technology” in countless articles, but when we get right down to it, what we call “cloud technology” is actually a collection of technologies, driven by methodologies and styles of computing, adapted to suit the mission-critical demands of enterprise HPC.

There is little consensus about the exact nature of cloud computing in enterprise HPC, at least in terms of what so many in the community are calling it. Some suggest it is a technology in its own right, while others state that cloud is merely a style of computing. Still others, including Addison Snell of Intersect360 Research, take the concept of a “style” or “form” of computing a bit further and call cloud a “methodology” of computing. While on the surface there may seem to be little difference between these terms, with growing adoption some consistency or consensus becomes important. To arrive at a sense of cloud as a technology, a methodology of computing, or a “new” style of computing, the question was posed to a handful of members of the enterprise and HPC community.

Cloud as a Methodology of Computing

Wolfgang Gentzsch, Advisor to the EU project Distributed European Infrastructure for Supercomputing Applications (DEISA) and member of the Board of Directors of the Open Grid Forum, suggests that cloud computing is not a distinct technology, but rather a combination of technologies that have evolved over decades, creating something far more akin to a methodology than a style. Gentzsch states:

Cloud computing is many things to many people. If, however, you look closer at its evolution, from terminal-mainframe, to client-server, to client-grid, and finally to client-cloud (perhaps to terminal-cloud, or PDA-cloud, next), it is the logical result of a 20-year effort to make computing more efficient and more user friendly — from self-plumbing to utility.

In my opinion, cloud comes closest to being a methodology, i.e., “a set of methods, practices, procedures, and rules” defined and applied by the IT community to provide user-friendly access to efficient computing. At a high level, these are: computing as a utility; pay-per-use billing; access over the internet, anytime, anywhere; scaling resources; Opex instead of Capex; and so on.

To a much lesser extent does this have to do with any specific technology you would call cloud technology; the technological bits and pieces needed to build and use a cloud were developed before the term cloud computing was invented, and are thus independent of cloud. In fact, already in the ’90s, the application service provider (ASP) idea was purest SaaS, and almost all the ingredients were already there: the Internet, secure portals, server farms, ASP-enabled applications, and software companies willing to implement. But all these components were still inefficient: server farms didn’t scale, bandwidth was low, portals were clumsy, and, most importantly, users weren’t mentally ready for ASP.

Today, all the technology is on the table to build a highly efficient, scalable, flexible, dynamic cloud. Still, the most severe roadblocks to cloud adoption today (the same as with ASPs and grids) come from mental barriers and considerations like privacy, competitiveness, and intellectual property issues. (See a more complete listing of roadblocks in my most recent blog.)

So, in my opinion, cloud computing is a methodology for utility computing, enabled by different modern technologies, supporting a new style of computing, i.e., computing via the Internet.
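Gentzsch’s “pay-per-use” and “Opex instead of Capex” characteristics can be made concrete with a back-of-the-envelope comparison. The sketch below uses entirely hypothetical prices (the hourly rate, server cost, and operating overhead are assumptions, not figures from Gentzsch or any provider) to show why renting tends to win at low utilization while ownership wins as a machine approaches constant use.

```python
# Back-of-the-envelope sketch (editorial illustration with made-up numbers)
# of the Opex-vs-Capex trade-off: pay-per-use rental vs. buying and
# operating a machine outright. All prices are hypothetical assumptions.

HOURLY_RATE = 0.50            # $/hour for a rented cloud instance (assumed)
SERVER_CAPEX = 8_000.00       # up-front purchase price of a comparable server
ANNUAL_OPEX_OWNED = 1_500.00  # power, cooling, admin per year (assumed)
YEARS = 3

def cloud_cost(hours_per_year: float) -> float:
    """Total pay-per-use cost over the period: scales with actual usage."""
    return HOURLY_RATE * hours_per_year * YEARS

def owned_cost() -> float:
    """Total cost of ownership: mostly fixed, regardless of utilization."""
    return SERVER_CAPEX + ANNUAL_OPEX_OWNED * YEARS

if __name__ == "__main__":
    for hours in (500, 2000, 8760):  # light, moderate, and 24/7 usage per year
        print(f"{hours:5d} h/yr  cloud: ${cloud_cost(hours):>9,.2f}"
              f"  owned: ${owned_cost():>9,.2f}")
    # With these assumed numbers, renting wins at low utilization and
    # ownership wins as the machine approaches constant use.
```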

Echoing this view of cloud as a methodology of computing rather than a unique set of technologies (albeit arriving at it by a different route), Bruce Maches, former director of information technology for Pfizer’s R&D division and current CIO of BRMaches & Associates, stated:

There are arguments that can be made on both sides (yes or no) for all three of the possibilities. I would argue no, cloud is not a technology in and of itself. Cloud computing is the natural evolution of the use of the infrastructure built around and supporting the internet and the services it provides. There is no one single technology you can point to and say ‘that is cloud computing.’ Certainly there are many computing advances that enable the leveraging of hardware and software resources over the internet and allow companies to avoid building out their own expensive infrastructure. To try to lump them into one technology called cloud just doesn’t quite work.

Is cloud a style of computing? This is a harder one to define as a style can be a manner or technique. It would be difficult to come up with definitive arguments to say either yes or no. Is it a methodology? Is it a discipline on how computing resources, regardless of source, are appropriately and efficiently applied to solve problems? Are there underlying governance principles that can be used to determine if cloud computing is the right answer to meet a particular need?

I would make the argument that the application of cloud computing is the overall gestalt of using appropriate methodologies to determine when to apply the ‘style’ of cloud computing, all of which is supported by the underlying computing and networking technologies.

Enterprise and HPC Cloud as a (Not So) New Style of Computing

Weisong Shi is an Associate Professor of Computer Science at Wayne State University, where he directs the Mobile and Internet Systems Laboratory (MIST) and pursues research interests in computer systems and mobile and cloud computing. Shi, who co-authored this article, suggests that from the perspective of end users, cloud is a “new” style of computing, stating:

To discuss this, we need to take a look at the history of computing. I think there have been three phases of computing in the last 60 years. In the first phase (1960-1980), also known as the mainframe era, the common setting was a mainframe with tens of dumb terminals. If a user wanted to use a computer, he or she had to go to a computer room and submit the job. The advantage of this style is that end users didn’t need to maintain the computer, e.g., installing software, upgrading drivers, and so on, but at the cost of flexibility. In the second phase (1980-2005), also known as the PC era, each user had his or her own computer (this is what PC stands for: personal computer). The biggest advantage of this computing style is the flexibility it brought us. End users can do computing wherever they want and don’t have to go to a dedicated computer room. We have witnessed the success of this model since the inception of the personal computer. However, as computers penetrate everyday life, the computer looks more and more like an appliance in our homes, and end users want to treat a computer the same way they treat a TV or a refrigerator. Clearly, the PC model does not work here, since it requires end users to install and maintain the computers themselves; the PC era is also not well suited to content sharing among multiple users, since the network is not treated as a first-class entity in this phase.

The fast growth of Internet services (e.g., Google Docs and YouTube), together with the wide deployment of 3G/4G technologies, is stimulating another wave of revolution in the way we use computers: cloud computing. I think we are entering the cloud computing era, where end users will enjoy the flexibility brought by mobile Internet devices (MIDs) and the ease of managing and sharing their content (email, documents, photos, videos, and so on) brought by cloud computing. With cloud computing, we will realize the vision of “Computing for the Masses” in the near future.

From the technology point of view, I don’t think cloud computing introduces too many new challenges or new ideas. What we need to do in these systems is use existing techniques more efficiently. For example, the Dynamo system, designed by Amazon, uses the most common techniques found in distributed systems textbooks, such as optimistic replication, quorum systems, and so on. In the Google File System (GFS), we don’t see too many new ideas, either. The challenge these systems face is how to make them work at large scale and how to use resources more efficiently. In summary, I think cloud computing is more a “new” style of computing than a new technology or methodology.
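To make Shi’s point about “existing techniques” concrete, here is a minimal sketch (an editorial illustration, not Amazon’s code) of the quorum rule that Dynamo-style stores rely on: with N replicas, choosing a write quorum W and a read quorum R such that R + W > N guarantees that any read set overlaps the most recent write set.

```python
# Toy illustration of quorum-based replication: R + W > N means every
# read quorum intersects every write quorum, so reads see the latest write.

from dataclasses import dataclass, field


@dataclass
class Replica:
    """A single replica holding a versioned value."""
    value: str = ""
    version: int = 0


@dataclass
class QuorumStore:
    """Toy key-value store with N replicas and tunable R/W quorum sizes."""
    n: int = 3
    w: int = 2
    r: int = 2
    replicas: list = field(default_factory=list)

    def __post_init__(self):
        assert self.r + self.w > self.n, "R + W must exceed N for overlap"
        self.replicas = [Replica() for _ in range(self.n)]

    def write(self, value: str) -> None:
        """Write to the first W replicas (the 'write quorum')."""
        new_version = max(rep.version for rep in self.replicas) + 1
        for rep in self.replicas[: self.w]:
            rep.value, rep.version = value, new_version

    def read(self) -> str:
        """Read from R replicas and return the newest version seen."""
        sample = self.replicas[-self.r :]  # deliberately a different subset
        newest = max(sample, key=lambda rep: rep.version)
        return newest.value


if __name__ == "__main__":
    store = QuorumStore(n=3, w=2, r=2)
    store.write("hello")
    # Because R + W > N, the read quorum overlaps the write quorum and the
    # latest value is returned even though the two subsets differ.
    print(store.read())  # -> "hello"
```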

When Definitions Become Stifling

Jose R. Rodriguez, CIO of San Jose, Calif.-based Metzli Information Technology, a consulting and implementation firm aligned with IBM Dynamic Infrastructure initiatives, suggests that cloud is a style, methodology, and blend of technologies at once, stating:

If we accept Irving Wladawsky-Berger’s insight that cloud computing is the evolution of Internet-based computing, it is clear that not a single technology but multiple technologies are at work facilitating network access to a pool of configurable computing resources (NIST). That hardware-decoupled, virtualized, shared resource pool is highly available, provisioned and released on demand (NIST), with a high degree of provider automation so as to minimize management overhead. Revision 15 of the NIST definition lists not one but three styles, or models, of delivering services via cloud computing. In the first, software as a service (SaaS), provider applications are accessible from end users’ heterogeneous computing devices; the second, platform as a service (PaaS), provides a homogeneous environment suitable for applications deployed and managed by the end user; and the third, infrastructure as a service (IaaS), is suitable for arbitrary end-user deployment and control of applications, platform, storage, and processing.

It should be noted that in the aforementioned styles, or service delivery models, the complexity of the underlying cloud infrastructure is hidden from the end user. Hence, cloud computing is rather a methodology delineating an evolving computing paradigm characterized by high availability and broad network access, elasticity, pooling of resources, and a mechanism to measure the usage of those resources (NIST). Accordingly, although cloud computing may be logically categorized into private, public, community, and hybrid deployment models, Irving Wladawsky-Berger might describe the evolving paradigm as analogous to the industrialization of the delivery mechanism for cloud services: the datacenter.
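The SaaS/PaaS/IaaS split Rodriguez describes is often summarized by who manages which layer of the stack. The sketch below is an editorial illustration of that commonly cited division of responsibility; the layer names and the exact split are rules of thumb, not text from the NIST definition.

```python
# Editorial sketch (not from the NIST definition) of the commonly cited
# division of management responsibility across the three service models.

STACK = [
    "application",
    "runtime/platform",
    "operating system / virtual machine",
    "servers & storage",
    "networking",
]

# Layers the *end user* typically manages under each model; the cloud
# provider handles everything else (plus the physical facility).
USER_MANAGED = {
    "SaaS": [],  # the user simply consumes the provider's application
    "PaaS": ["application"],
    "IaaS": ["application", "runtime/platform",
             "operating system / virtual machine"],
}

for model, user_layers in USER_MANAGED.items():
    provider_layers = [layer for layer in STACK if layer not in user_layers]
    print(f"{model}: user manages {user_layers or 'only data and settings'}; "
          f"provider manages {provider_layers}")
```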

As John Hurley, principal investigator and director for the National Nuclear Security Administration and DOE-sponsored Center for Disaster Recovery, notes in his discussion on the topic, “The advancements that have revealed themselves in hardware, software and networking now enable us to solve much different kinds of problems. Of even greater importance is the fact that the potential for solutions to very real, practical and large-scale problems has not only enabled us, but actually requires us to really start from the beginning, in terms of how we define the problems and the resources to address them.”

In short, the definitions we need to be most concerned with are those that move end users forward and keep innovation thriving. While it can be dangerous to put forth mixed messages about what cloud is (for instance, consistently calling it a “technology” as if it were rooted in one single innovation), greater consensus would allow the bulk of writing on the topic to clarify cloud for end users by adhering to one definition: that cloud is a blend of technologies enabling new styles and methodologies of computing for enterprise and HPC users.
