Cloud Computing Opportunities in HPC

By Christopher G. Willard, Ph.D., Addison Snell, Laura Segervall

November 2, 2009

This article is excerpted from “Cloud Opportunities in HPC: Market Taxonomy,” published by InterSect360 Research. The full article was distributed to subscribers of the InterSect360 market advisory service and can also be obtained by contacting [email protected].

In Life, the Universe, and Everything, the third book of Douglas Adams’ whimsical Hitchhiker fantasy trilogy, cosmic wayfarer Ford Prefect describes how an object, even a large object, could effectively be rendered invisible to the general populace by surrounding it with an “SEP field” that causes would-be observers to avoid recognizing Somebody Else’s Problem. “An SEP,” Ford helpfully explains, “is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem.”

If we were to reinterpret SEP to stand for “Somebody Else’s Processing,” we would be well on the way to a definition of cloud computing.

The term “cloud” comes from the engineering practice of drawing a cloud in a schematic to represent an external resource that the engineer’s design will interact with — a part of the workflow that he or she will assume is working but that is not part of that specific design. For example, a processor designer might draw a cloud to represent a memory system, with arrows indicating the flow of data in and out of the memory cloud. Cloud computing takes this concept to an organizational level; entire sections of IT workflows can now be virtualized into resources that are someone else’s concern.

Cloud computing is therefore a new instantiation of distributed computing. It is built on grid computing concepts and technology and further enabled by Internet technologies for access. Cloud computing is the delivery of some part of an IT workflow — such as computational cycles, data storage, or application hosting — using an Internet-style interface. This definition includes Web-immersed intranets as conduits for accessing private clouds.

Cloud computing is currently driven by business models that attempt to utilize or monetize unused resources. Grid, virtualization, and now cloud technologies have attempted to find and tap idle resources, thus reducing costs or generating revenue. The most interesting difference between cloud computing and earlier forms of distributed computing is that in developing ultra-scale computing centers, organizations such as Google and Amazon incidentally built out significant caches of occasionally idle computing resources that could be made generally available through the Internet. Furthermore, these organizations found that they had developed significant skills in constructing and managing these resources, and economies of scale allowed them to purchase incremental equipment at relatively lower prices. The cloud was born as an effort to monetize those skills, economic advantages, and excess capacity.

This is important because from a business model point of view the cloud resources came into existence at no cost, with minimal incremental support requirements. The majority of the costs are borne by the core businesses, and therefore, at least initially, customers of the excess capacity do not need to foot the bill for capital expenditures. Costs associated with staff training, facilities, and development are similarly already fully amortized and absorbed by the parent businesses. There is little more appealing than being able to sell something that you get for free.

With such an appealing proposition in play, many other organizations are scrambling to see whether they have an infrastructure — public or private — that can be exploited for gain through cloud computing. However, when significant excess capacity does not exist, or if it cannot be leveraged in a timely or reliable fashion, it is not clear what sustainable business models exist for cloud computing.

High-end, public cloud computing offerings represent a convergence of grid and Internet technologies, potentially enabling workable new business models. Smaller, private clouds are a technical evolution that expands the ease of use and deployment of grids in more organizations.

As cloud computing technologies mature, InterSect360 Research sees several possible business models that could evolve. Although we emphasize High Performance Computing in our analysis, cloud computing transcends HPC, and similar models will exist in non-HPC markets.

Utility Computing Models

Cloud computing provides a methodology for extending utility computing access models. Utility computing is not new; it has been touted for several years as a way for users to manage peaks in demand, extend capabilities, or reduce costs. Traditionally, limitations in network bandwidth, security issues, software licensing models, and repeatability of results have acted as barriers to adoption, and all of these still need to be addressed with cloud.

There are four major variations on the utility computing model with cloud:

Cycles On Demand

The cycles-on-demand model is the most basic approach to cloud computing. The cloud supplier provides hardware and basic software environments, and the user provides application software, application data, and any additional middleware required. In this case users are simply buying access to computer processors, which they provision and manage as needed in order to run their applications, after which the resources are “returned” to the cloud provider. Users are charged for the time the resources are in use, plus possibly some overhead costs. This model places relatively low demands on the cloud provider and relatively high demands on the user, who must ensure that the rented resources generate effective utility.
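To make the division of labor concrete, below is a minimal sketch of the cycles-on-demand pattern. The CloudProvider class, its provision/release calls, the machine image name, and the hourly rate are hypothetical stand-ins for illustration, not any specific vendor's interface.

    import time

    class CloudProvider:
        """Stand-in for a cycles-on-demand service; all names are hypothetical."""

        def provision(self, nodes, image):
            # Provider supplies hardware plus a base software environment.
            print(f"Provisioning {nodes} nodes with image '{image}'")
            return {"nodes": nodes, "start": time.time()}

        def release(self, lease, hourly_rate=0.10):
            # Charge only for the time the resources were actually held.
            hours = (time.time() - lease["start"]) / 3600.0
            return lease["nodes"] * hours * hourly_rate

    provider = CloudProvider()

    # The user supplies application software and data; the provider supplies
    # only the hardware and the base environment named by the image.
    lease = provider.provision(nodes=16, image="base-linux-mpi")
    try:
        pass  # run the user's own application here, e.g., an MPI job
    finally:
        cost = provider.release(lease)  # resources "returned" to the cloud
        print(f"Estimated charge: ${cost:.2f}")

The burden of choosing node counts, images, and job management stays entirely with the user, which is exactly why this model demands the most from the consumer.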

Storage Clouds

The storage cloud model complements the cycles-on-demand model both in operational approach (users buy disk space at a cloud provider's facility) and in providing a more complete solution for cycles users: a place to put programs and data between job runs. In the storage-on-demand approach, the cloud is used:

  • As the final (archival) stage in hierarchical storage management schemes, even if it is only a two-level hierarchy of local disk and cloud. On the consumer side this is essentially the concept used for PC backup services. (A minimal sketch of this two-level case follows the list.)

  • As a file-sharing buffer where users can place data that can be accessed at a later time by other users. This approach is at the heart of photo-sharing sites and, arguably, of social sites such as Facebook and LinkedIn. The same concept is also used for shared science databases in areas such as genomics and chemistry.
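Here is a minimal sketch of the archival case: a two-level hierarchy with local disk in front of a cloud tier. The cloud_put and cloud_get functions, the cache path, and the aging policy are hypothetical stand-ins for whatever object-store API and migration policy a provider actually exposes.

    import os
    import time

    LOCAL_CACHE = "/tmp/scratch-cache"   # hypothetical local tier
    COLD_AFTER_DAYS = 30                 # illustrative policy knob

    os.makedirs(LOCAL_CACHE, exist_ok=True)

    def cloud_put(name, path):
        """Hypothetical upload to the provider's object store (archival tier)."""
        print(f"archiving {name} to cloud")

    def cloud_get(name, path):
        """Hypothetical restore from the provider's object store."""
        print(f"restoring {name} from cloud")

    def open_dataset(name):
        """Serve from local disk when present; otherwise restore from the cloud tier."""
        local = os.path.join(LOCAL_CACHE, name)
        if not os.path.exists(local):
            cloud_get(name, local)
        return local

    def migrate_cold_files():
        """Policy step: push files untouched for COLD_AFTER_DAYS to the archival tier."""
        cutoff = time.time() - COLD_AFTER_DAYS * 86400
        for entry in os.listdir(LOCAL_CACHE):
            path = os.path.join(LOCAL_CACHE, entry)
            if os.path.getmtime(path) < cutoff:
                cloud_put(entry, path)
                os.remove(path)  # free local space; the cloud copy is archival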

Software as a Service

Software as a service (SaaS) extends the basic cycles-on-demand model by providing application software within the cloud. This model addresses software licensing issues by bundling the software costs within the cloud processing costs. It also addresses software certification and results repeatability issues because the cloud provider controls both the hardware and software environment and can provide specific system images to users.

SaaS also has the advantage for providers of allowing them to sell services along with the software, and to use the cloud as a demonstration platform for direct sales of software products. In addition, the user is able to turn much of the system administration task over to the provider. The major drawback to this strategy is that users generally run a series of software packages as part of their overall R&D workflow; in that case, data would need to be moved into and out of the cloud for specific stages of the workflow, unless the cloud provider supports an end-to-end process.
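A minimal sketch of the SaaS interaction pattern follows, assuming a generic REST-style interface rather than any particular provider's API; the service URL, job fields, and endpoints are invented for illustration.

    import requests

    SERVICE = "https://saas.example.com/api/v1"  # hypothetical endpoint

    # Submit input data to an application the provider licenses and hosts;
    # the software cost is bundled into the per-job processing charge.
    resp = requests.post(
        f"{SERVICE}/jobs",
        json={"application": "crash-sim", "version": "4.2", "input": "model.dat"},
        timeout=30,
    )
    resp.raise_for_status()
    job_id = resp.json()["job_id"]

    # Retrieve results; because the provider controls both hardware and
    # software, a job resubmitted later runs in an identical environment,
    # which is what addresses certification and repeatability concerns.
    result = requests.get(f"{SERVICE}/jobs/{job_id}", timeout=30)
    print(result.json())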

Environment Hosting

Environment hosting is the use of a service to support virtually all computational tasks, with servers, storage, and software all being maintained by a third party. This concept can include constructs such as platform as a service (PaaS) and infrastructure as a service (IaaS). Arguably, environment hosting in the cloud is an oxymoron; however, it represents the upper end of the utility computing spectrum and a logical destination of cloud strategies. This approach addresses software, result repeatability, and most networking issues by simply providing dedicated resources all in one (logical) place. It addresses many of the technical security issues, but not the organizational security problem of inserting a third party into the workflow process.
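Below is a minimal sketch of what an environment-hosting request might look like, assuming a declarative specification handed to a third party to realize and maintain; the spec format and the deploy call are hypothetical.

    # The entire stack is declared once: servers, storage, and software.
    environment_spec = {
        "compute": {"nodes": 64, "cores_per_node": 8, "interconnect": "infiniband"},
        "storage": {"scratch_tb": 10, "archive_tb": 100},
        "software": ["os-image:linux-hpc", "mpi-stack", "batch-scheduler"],
        # Dedicated resources in one logical place sidestep most networking
        # and repeatability issues; the provider keeps the images stable.
        "policy": {"dedicated": True, "snapshot_images": True},
    }

    def deploy(spec):
        """Stand-in for a provider call that realizes the hosted environment."""
        print(f"Requesting hosted environment: {spec['compute']['nodes']} nodes")

    deploy(environment_spec)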

Cloud-Generated Markets

In addition to the models for those who would consume resources through the cloud, there are applications made possible by the combination of Internet communications and large computing resources. This includes the opportunity for organizations to become cloud computing service providers, either externally or internally. There is also the potential for some secondary markets to be enabled by the adoption of cloud technologies.

Restructuring of Internet-Based Service Infrastructures

One of the most interesting aspects of cloud computing is that Internet companies whose value-add and expertise lie in intellectual property or content (as opposed to purchasing, managing, and running computer hardware systems) could move their internal computing architecture to the cloud while maintaining system management and operating control in-house. With this strategy, an organization would move the bulk of its computing to the cloud, keeping only what is necessary for communications and cloud management; in doing so, it converts internal costs for systems, software, staff, space, and power into usage fees in the cloud. Cloud technology and service providers facilitate and accelerate the industry's evolution toward a network of interrelated specialty companies, as opposed to groups of organizations each performing the same set of infrastructure functions in house. The major issue potentially holding this model back is cost; i.e., the premium users would be willing to pay for a service versus a do-it-yourself solution.

Personal Clouds

This strategy would replace personal computers with an advanced terminal connected to a cloud utility that holds all of the user's data and software. The advantage for users is that they would be relieved of the burden of purchasing, maintaining, and upgrading their personal systems. They would also have professional support for such tasks as system backup and security, and would be able to access their computing environment from any Web-connected device.

This strategy may represent the evolutionary future of the Internet, particularly as more devices become Web-enabled and the relationship between the Web and the personal computer is weakened by competing devices such as smartphones. The main challenge to this model is overall bandwidth on the Internet. One side effect of such an evolution would be to replace the role of the operating system with a Web browser plus whatever back-end environment the cloud supplier chose to provide; another would be the creation of a new product class of Web terminals.

InterSect360 Research Analysis

We see cloud computing as part of the logical progression in distributed computing. It is not completely revolutionary, nor is it a panacea that will provide any service that can be imagined. The business models must be considered in terms of cost and control, barriers and benefits.

Of all the cloud business models, InterSect360 Research believes that SaaS has the highest potential for success within HPC. It addresses several of the major dampening factors associated with cloud and provides additional revenue opportunities in the services arena. It also targets industrial users, who would be the most likely to pay a premium for the product without attempting to develop competing solutions. Furthermore, companies can adopt SaaS models in the cloud in a phased or tiered way, first proving the concept on private clouds before giving themselves over to public or hybrid models. (This same phenomenon persists with private and public grids today.)

Organizations that have experience with the software and in-house operations may look to SaaS options for peak-load management and capacity extension. However, we believe the greater opportunity is in selling packaged cloud computing, software, and start-up services to companies testing HPC solutions. Our research indicates that there are major start-up barriers to using HPC solutions among small and medium companies, including finding the expertise to create the organization's first scalable digital models.

The major barrier to SaaS adoption in HPC is the fragmentation of the industry's applications software sector. The boutique nature of the opportunity may mean there is not sufficient volume to merit an ISV's investment in creating and marketing cloud-enabled versions of its applications. Interestingly, in a recursive manner, small SaaS providers could theoretically tap into larger cycles-on-demand cloud providers to supply the computing resources.

Similarly, implementing environment hosting within current cloud environments would entail significant effort by an HPC user organization to set up and manage storage and software environments. It would also be limited by software licensing issues, for industrial users in particular. Thus market opportunities for this option are very limited at this time. That said, a small organization could conceivably do all its computing in the cloud, keeping all its data on a cloud storage system, using only internally developed, open-source, or SaaS software, and trusting that its small size within a large herd provides a measure of security.

Finally, we note that Web-based software services are not new to the market; they currently range from income tax preparation services to online gaming companies. SaaS fits into cloud markets based on the concept of work being sent to an outside party and results returned, without the sender having knowledge of exactly how those results are generated. For some users, SaaS may inherently make sense. Ultimately the best way to help users adopt HPC applications may be to make them Somebody Else's Problem.
