A Crisis of Definitions?

By Nicole Hemsoth

April 18, 2010

We have heard all about “cloud technology” in countless articles, but when we get right down to it, what we call “cloud technology” is actually a collection of technologies, driven by methodologies and styles of computing, adapted to suit the mission-critical demands of enterprise HPC.

There is little consensus about the exact nature of cloud computing in enterprise HPC, at least in terms of what so many in the community are calling it. Some suggest it is a technology in its own right, while others state that cloud is merely a style of computing. Still others, including Addison Snell of Intersect360 Research, take the concept of “style” or “forms” of computing a bit further and call cloud a “methodology” of computing. While on the surface there may seem to be little difference between these terms, with growing adoption it is important to reach some consistency or consensus. To arrive at a sense of cloud as a technology, a methodology of computing, or a “new” style of computing, the question was posed to a handful of members of the enterprise and HPC community.

Cloud as a Methodology of Computing

Wolfgang Gentzsch, Advisor to the EU project Distributed European Infrastructure for Supercomputing Applications (DEISA) and member of the Board of Directors of the Open Grid Forum, suggests that cloud computing is not a distinct technology, but rather a combination of technologies that have evolved over decades to create something far more akin to a methodology than a style. Gentzsch states:

Cloud computing is many things to many people. If, however, you look closer at its evolution, from terminal-mainframe, to client-server, to client-grid, and finally to client-cloud (perhaps to terminal-cloud, or PDA-cloud, next), it is the logical result of a 20-year effort to make computing more efficient and more user friendly, from self-plumbing to utility.

In my opinion, cloud comes closest to being a methodology, i.e., “a set of methods, practices, procedures, and rules” defined and applied by the IT community to provide user-friendly access to efficient computing, which, at a high level, includes: computing as a utility; pay-per-use billing; access over the Internet, anytime, anywhere; scalable resources; Opex instead of Capex; and so on.

To a far lesser extent, this has to do with any specific technology you would call cloud technology; the technological bits and pieces you need to build and use a cloud were developed before the term cloud computing was invented, and are thus independent of cloud. In fact, already in the ’90s, the ASP (application service provider) idea was pure SaaS, and almost all the ingredients were already there: the Internet, secure portals, server farms, ASP-enabled applications, and software companies willing to implement. But all these components were still inefficient: server farms didn’t scale, bandwidth was low, portals were clumsy, and most importantly, users weren’t mentally ready for ASP.

Today, all the technology is on the table to build a most efficient, scalable, flexible, dynamic cloud. Still, the most severe roadblocks to cloud adoption today are the same as with ASPs and grids: they come from mental barriers and considerations like privacy, competitiveness, and intellectual property issues. (See a more complete list of roadblocks in my most recent blog.)

So, in my opinion, cloud computing is a methodology for utility computing, enabled by different modern technologies, supporting a new style of computing, i.e., computing via the Internet.

Echoing this view of cloud as a methodology of computing rather than a unique set of technologies (albeit from a different angle), Bruce Maches, former director of information technology for Pfizer’s R&D division and current CIO for BRMaches & Associates, stated:

There are arguments that can be made on both sides (yes or no) for all three of the possibilities. I would argue no, cloud is not a technology in and of itself. Cloud computing is the natural evolution of the use of the infrastructure built around and supporting the Internet and the services it provides. There is no one single technology you can point to and say ‘that is cloud computing.’ Certainly there are many computing advances that enable the leveraging of hardware and software resources over the Internet and allow companies to avoid building out their own expensive infrastructure. To try to lump them into one technology called cloud just doesn’t quite work.

Is cloud a style of computing? This is a harder one to define as a style can be a manner or technique. It would be difficult to come up with definitive arguments to say either yes or no. Is it a methodology? Is it a discipline on how computing resources, regardless of source, are appropriately and efficiently applied to solve problems? Are there underlying governance principles that can be used to determine if cloud computing is the right answer to meet a particular need?

I would make the argument that the application of cloud computing is the overall gestalt of using appropriate methodologies to determine when to apply the ‘style’ of cloud computing, all of which is supported by the underlying computing and networking technologies.

Enterprise and HPC Cloud as a (Not So) New Style of Computing

Weisong Shi is an Associate Professor of Computer Science at Wayne State University, where he directs the Mobile and Internet Systems Laboratory (MIST) and pursues research in computer systems and mobile and cloud computing. Shi, who co-authored this article, suggests that from the perspective of end users, cloud is a “new” style of computing, stating:

To discuss this, we need to take a look at the history of computing. I think there have been three phases of computing in the last 60 years. In the first phase (1960-1980), also known as the mainframe era, the common setting was a mainframe with tens of dumb terminals. If a user wanted to use a computer, he or she would have to go to a computer room and submit the job. The advantage of this style was that end users didn’t need to maintain the computer (installing software, upgrading drivers, and so on), but at the cost of flexibility. In the second phase (1980-2005), also known as the PC era, each user had his or her own computer; this is what PC stands for, personal computer. The biggest advantage of this computing style is the flexibility it brought us. End users can do computing wherever they want, and don’t have to go to a dedicated computer room. We have witnessed the success of this model since the inception of personal computers. However, as computers penetrate ever further into daily life, the computer looks more and more like an appliance in our home, and end users want to treat a computer the same way they treat a TV or refrigerator. Apparently, the PC model does not work, since it requires end users to install and maintain the computers by themselves; the PC era is also not well suited for content sharing among multiple users, since the network is not treated as a first-class entity in this phase.

The fast growth of Internet services, e.g., Google Docs, YouTube, etc., together with the wide deployment of 3G/4G technologies, stimulates another wave of revolution in the way we use computers, i.e., cloud computing. I think we are entering the cloud computing era, where end users will enjoy the flexibility brought by mobile Internet devices (MIDs) and the ease of management and sharing of their content, i.e., email, documents, photos, videos, and so on, brought by cloud computing. With cloud computing, we will realize the vision of “Computing for the Masses” in the near future.

From the technology point of view, I don’t think cloud computing introduces many new challenges or new ideas. What we need to do in these systems is use the existing techniques more efficiently. For example, the Dynamo system, designed by Amazon, uses the most common techniques in the distributed systems textbook, such as optimistic replication, quorum systems, and so on. In the Google File System (GFS), we don’t see too many new ideas, either. The challenge they face is how to get these techniques to work in a large-scale setting, and how to use resources in a more efficient way. In summary, I think cloud computing is more of a “new” style of computing than a new technology or methodology.
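The quorum systems Shi mentions are indeed standard textbook material. A minimal sketch of the idea in Python is below; the class and its names are illustrative, not Amazon's actual Dynamo API, and the in-memory replica list stands in for real storage nodes. The key property is the overlap condition R + W > N, which guarantees every read quorum intersects every write quorum.

```python
# A minimal sketch of quorum-based replication, one of the textbook
# techniques the quote attributes to systems like Amazon's Dynamo.
# This is a toy model: replicas are in-memory dicts, not network nodes.

class QuorumStore:
    def __init__(self, n=3, w=2, r=2):
        # Overlap condition: any read quorum intersects any write quorum,
        # so a read always sees at least one up-to-date replica.
        assert r + w > n, "R + W must exceed N for quorum overlap"
        self.n, self.w, self.r = n, w, r
        # Each replica maps key -> (version, value).
        self.replicas = [dict() for _ in range(n)]

    def put(self, key, value, version):
        # A write succeeds once W replicas have acknowledged it.
        for replica in self.replicas[: self.w]:
            replica[key] = (version, value)

    def get(self, key):
        # Read from R replicas and return the highest-versioned value.
        answers = [rep[key] for rep in self.replicas[: self.r] if key in rep]
        return max(answers)[1] if answers else None

store = QuorumStore(n=3, w=2, r=2)
store.put("session", "alpha", version=1)
store.put("session", "beta", version=2)
print(store.get("session"))  # latest write wins: beta
```

Real systems add the parts this sketch omits: vector clocks or timestamps for conflict detection, hinted handoff when nodes fail, and read repair to converge stale replicas.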

When Definitions Become Stifling

Jose R. Rodriguez, CIO of San Jose, Calif.-based Metzli Information Technology, a consulting and implementation firm aligned with IBM Dynamic Infrastructure initiatives, suggests that cloud is a style, methodology, and blend of technologies at once, stating:

If we accept Irving Wladawsky-Berger’s insight that cloud computing is the evolution of Internet-based computing, it is clear that not a single technology but multiple technologies are at work facilitating network access to a pool of configurable computing resources (NIST). That hardware-decoupled, virtualized shared resource pool is highly available, provisioned and released on demand (NIST), with a high degree of provider automation so as to minimize management overhead. Revision 15 of the NIST definition lists not one but three styles, or models, of delivering services via cloud computing: first, software as a service (SaaS), in which provider applications are accessible from heterogeneous end-user computing devices; second, platform as a service (PaaS), which provides a homogeneous environment for applications deployed and managed by the end user; and third, infrastructure as a service (IaaS), suitable for arbitrary end-user deployment and control of applications, platform, storage, and processing.

It should be noted that in the aforementioned styles, or service delivery models, the complexity of the underlying cloud infrastructure is hidden from the end user. Hence, cloud computing is rather a methodology delineating an evolving computing paradigm characterized by high availability and broad network access, elasticity, pooling of resources, and a mechanism to measure their usage (NIST). Accordingly, although cloud computing may be logically categorized into private, public, community, and hybrid deployment models, Irving Wladawsky-Berger might describe the evolving paradigm as analogous to the industrialization of the delivery mechanism for cloud services: the datacenter.

As John Hurley, principal investigator and director for the National Nuclear Security Administration and DOE-sponsored Center for Disaster Recovery, notes in his discussion on the topic, “The advancements that have revealed themselves in hardware, software and networking now enable us to solve much different kinds of problems. Of even greater importance is the fact that the potential for solutions to very real, practical and large-scale problems has not only enabled us, but actually requires us to really start from the beginning, in terms of how we define the problems and the resources to address them.”

In short, the definitions we need to be most concerned with are those that move end users forward and keep innovation thriving. While it can be dangerous to put forth “mixed” information about what cloud is (e.g., consistently calling it a “technology” as if it were rooted in one single innovation), greater consensus would let the overwhelming majority of writing on the topic clarify cloud for end users by adhering to one definition: that cloud is a blend of technologies that enable new styles and methodologies of computing for enterprise and HPC users.
