Cloud Computing Opportunities in HPC

By Christopher G. Willard, Ph.D., Addison Snell, Laura Segervall

November 2, 2009

This article is excerpted from “Cloud Opportunities in HPC: Market Taxonomy,” published by InterSect360 Research. The full article was distributed to subscribers of the InterSect360 market advisory service and can also be obtained by contacting [email protected].

In Life, the Universe, and Everything, the third book of Douglas Adams’ whimsical Hitchhiker fantasy trilogy, cosmic wayfarer Ford Prefect describes how an object, even a large object, could effectively be rendered invisible to the general populace by surrounding it with an “SEP field” that causes would-be observers to avoid recognizing Somebody Else’s Problem. “An SEP,” Ford helpfully explains, “is something we can’t see, or don’t see, or our brain doesn’t let us see, because we think that it’s somebody else’s problem.”

If we were to reinterpret SEP to stand for “Somebody Else’s Processing,” we would be well on the way to a definition of cloud computing.

The term “cloud” comes from the engineering practice of drawing a cloud in a schematic to represent an external resource that the engineer’s design will interact with — a part of the workflow that he or she will assume is working but that is not part of that specific design. For example, a processor designer might draw a cloud to represent a memory system, with arrows indicating the flow of data in and out of the memory cloud. Cloud computing takes this concept to an organizational level; entire sections of IT workflows can now be virtualized into resources that are someone else’s concern.

Cloud computing is therefore a new instantiation of distributed computing. It is built on grid computing concepts and technology and further enabled by Internet technologies for access. Cloud computing is the delivery of some part of an IT workflow — such as computational cycles, data storage, or application hosting — using an Internet-style interface. This definition includes Web-immersed intranets as conduits for accessing private clouds.

Cloud computing is currently driven by business models that attempt to utilize or monetize unused resources. Grid, virtualization, and now cloud technologies have attempted to find and tap idle resources, thus reducing costs or generating revenue. The most interesting difference between cloud computing and earlier forms of distributed computing is that in developing ultra-scale computing centers, organizations such as Google and Amazon incidentally built out significant caches of occasionally idle computing resources that could be made generally available through the Internet. Furthermore these organizations found that they had developed significant skills in constructing and managing these resources, and economies of scale allowed them to purchase incremental equipment at relatively lower prices. The cloud was born as an effort to monetize those skills, economic advantages, and excess capacity.

This is important because from a business model point of view the cloud resources came into existence at no cost, with minimal incremental support requirements. The majority of the costs are borne by the core businesses, and therefore, at least initially, customers of the excess capacity do not need to foot the bill for capital expenditures. Costs associated with staff training, facilities, and development are similarly already fully amortized and absorbed by the parent businesses. There is little more appealing than being able to sell something that you get for free.

With such an appealing proposition in play, many other organizations are scrambling to see whether they have an infrastructure — public or private — that can be exploited for gain through cloud computing. However, when significant excess capacity does not exist, or if it cannot be leveraged in a timely or reliable fashion, it is not clear what sustainable business models exist for cloud computing.

High-end, public cloud computing offerings represent a convergence of grid and Internet technologies, potentially enabling workable new business models. Smaller, private clouds are a technical evolution that expands the ease of use and deployment of grids in more organizations.

As cloud computing technologies mature, InterSect360 Research sees several possible business models that could evolve. Although we emphasize High Performance Computing in our analysis, cloud computing transcends HPC, and similar models will exist in non-HPC markets.

Utility Computing Models

Cloud computing provides a methodology for extending utility computing access models. Utility computing is not new; it has been touted for several years as a way for users to manage peaks in demand, extend capabilities, or reduce costs. Traditionally, limitations in network bandwidth, security issues, software licensing models, and repeatability of results have acted as barriers to adoption, and all of these still need to be addressed with cloud.

There are four major variations on the potential utility computing models with cloud:

Cycles On Demand

The cycles-on-demand model is the most basic approach to cloud computing. The cloud supplier provides hardware and basic software environments, and the user provides application software, application data, and any additional middleware required. In this case users are simply buying access to computer processors, which they provision and manage as needed in order to run their applications, after which the resources are “returned” to the cloud provider. Users are charged for the time the resources are in use, plus possibly some overhead costs. The demands are relatively low on the cloud provider, and relatively high on the user in terms of making sure there is effective utility generated by the rented resources.
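As a rough illustration of that lifecycle, the sketch below provisions processors, runs a user-supplied application, returns the resources, and computes a time-based charge. This is a minimal sketch only: the `CloudProvider` class, its methods, and the hourly rate are hypothetical constructs invented for illustration and do not correspond to any particular vendor's interface.

```python
import time

class CloudProvider:
    """Hypothetical cycles-on-demand provider; methods are illustrative only."""
    HOURLY_RATE = 0.10  # assumed price per node-hour, for illustration

    def provision(self, nodes):
        # A real service would allocate machines or virtual machines here.
        print(f"Provisioning {nodes} nodes")
        return {"nodes": nodes, "start": time.time()}

    def release(self, lease):
        # Resources are "returned" to the provider; billing stops here.
        hours = (time.time() - lease["start"]) / 3600.0
        return hours * lease["nodes"] * self.HOURLY_RATE


def run_job(lease, application, data):
    # The user supplies application software, data, and any middleware;
    # the provider supplies only hardware and a basic software environment.
    print(f"Running {application} on {lease['nodes']} nodes with {data}")


provider = CloudProvider()
lease = provider.provision(nodes=32)          # rent processors
run_job(lease, "my_cfd_solver", "mesh.dat")   # user-managed application run
cost = provider.release(lease)                # return resources, stop the meter
print(f"Charged ${cost:.2f} for time in use")
```

The point of the sketch is that everything between provision and release, from application software to scheduling, remains the user's responsibility.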

Storage Clouds

The storage cloud model complements the cycles-on-demand model both in terms of operational approach — users buy disk space at a cloud provider's facility — and in terms of providing a more complete solution for cycles users — a place to put programs and data between job runs. In the storage-on-demand approach the cloud is used:

  • As the final (archival) stage in hierarchical storage management schemes, even if it is only a two-level hierarchy of local disk and cloud (a minimal sketch of such a scheme appears after this list). On the consumer side this is essentially the concept behind PC backup services.

  • As a file-sharing buffer where users can place data for other users to access at a later time. This approach is at the heart of photo-sharing sites and, arguably, of social sites such as Facebook and LinkedIn. The same concept is also used for shared science databases in areas such as genomics and chemistry.
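To make the two-level hierarchy concrete, here is a minimal sketch of a migrate-when-cold policy. The `cloud_put` and `cloud_get` helpers, the `/tmp/cloud_archive` path, and the 30-day threshold are all assumptions made for illustration; they stand in for whatever object-storage interface a given provider exposes, not any real API.

```python
import os
import shutil

# "cloud_put" and "cloud_get" are hypothetical stand-ins for a provider's
# object-storage interface; a real client would make network calls instead.
ARCHIVE_DIR = "/tmp/cloud_archive"   # local directory playing the cloud tier

def cloud_put(path):
    os.makedirs(ARCHIVE_DIR, exist_ok=True)
    shutil.copy(path, ARCHIVE_DIR)                       # "upload" to archive

def cloud_get(name, dest):
    shutil.copy(os.path.join(ARCHIVE_DIR, name), dest)   # "download" back

def archive_if_cold(path, days_idle, threshold_days=30):
    """Two-level hierarchy: keep recently used files on local disk (the
    working tier) and migrate cold files to the cloud (the archival tier)."""
    if days_idle >= threshold_days:
        cloud_put(path)
        os.remove(path)          # reclaim local disk once the copy is safe
        return "archived to cloud"
    return "kept on local disk"
```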

Software as a Service

Software as a service (SaaS) extends the basic cycles-on-demand model by providing application software within the cloud. This model addresses software licensing issues by bundling the software costs within the cloud processing costs. It also addresses software certification and results repeatability issues because the cloud provider controls both the hardware and software environment and can provide specific system images to users.

SaaS also has advantages for providers: it allows them to sell services along with the software and to use the cloud as a demonstration platform for direct sales of software products. In addition, the user is able to turn much of the system administration task over to the provider. The major drawback to this strategy is that users generally run a series of software packages as part of their overall R&D workflow; in such cases, data would need to be moved into and out of the cloud for specific stages of the workflow, or the cloud provider must support an end-to-end process.
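That data-movement drawback can be made concrete with a small sketch of a workflow in which only one stage runs as hosted SaaS: the input must be staged into the cloud and the results staged back out before the next local stage can begin. The `upload`, `run_hosted_app`, and `download` functions below are placeholders invented for illustration, not a real provider API.

```python
# Placeholders for a provider's transfer and job-submission mechanisms;
# the names are assumptions made for illustration, not a real API.
def upload(local_path):
    print(f"staging {local_path} into the cloud")
    return f"cloud://{local_path}"

def run_hosted_app(app, cloud_input):
    print(f"running hosted application {app} on {cloud_input}")
    return cloud_input + ".out"

def download(cloud_path, local_path):
    print(f"staging {cloud_path} back to {local_path}")
    return local_path

# A workflow where only the solver stage is hosted: data crosses the
# cloud boundary twice for that single stage.
mesh = "geometry.msh"                          # built by a local pre-processor
solution = download(run_hosted_app("solver_saas", upload(mesh)), "solution.dat")
# local post-processing would then continue on "solution.dat"
```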

Environment Hosting

Environment hosting is the use of a service to support virtually all computational tasks, with servers, storage, and software all maintained by a third party. This concept can include constructs such as platform as a service (PaaS) and infrastructure as a service (IaaS). Arguably, environment hosting in the cloud is an oxymoron; nevertheless, it represents the upper end of the utility computing spectrum and a logical destination of cloud strategies. This approach addresses software, results repeatability, and most networking issues by simply providing dedicated resources all in one (logical) place. It addresses many of the technical security issues, but not a consuming organization's security concern over inserting a third party into the workflow process.

Cloud-Generated Markets

In addition to the models for those who would consume resources through the cloud, there are applications that are made possible by the combination of Internet communications and large computing resources. This is inclusive of the opportunities for organizations to become cloud computing service providers, either externally or internally. In addition, there is the potential for some secondary markets to be enabled by the adoption of cloud technologies.

Restructuring of Internet-Based Service Infrastructures

One of the most interesting aspects of cloud computing is that Internet companies whose value-add and expertise lie in intellectual property or content (as opposed to purchasing, managing, and running computer hardware systems) could move their internal computing architecture to the cloud while maintaining system management and operating control in-house. With this strategy, an organization would move the bulk of its computing to the cloud, keeping only what is necessary for communications and cloud management; in doing so it converts internal costs for systems, software, staff, space, and power into usage fees in the cloud. Cloud technology and service providers thus facilitate and accelerate the industry's evolution toward a network of interrelated specialty companies, as opposed to groups of organizations each performing the same set of infrastructure functions in house. The major issue potentially holding this model back is cost; i.e., the premium users would be willing to pay for a service versus a do-it-yourself solution.

Personal Clouds

This strategy would replace personal computers with an advanced terminal connected to a cloud utility that holds all of the user's data and software. The advantage for users is that they would be relieved of the burden of purchasing, maintaining, and upgrading their personal systems. They would also have professional support for such tasks as system backup and security, and they would be able to access their computing environment from any Web-connected device.

This strategy may represent the evolutionary future of the Internet, particularly as more devices become Web-enabled and the relationship between the Web and the personal computer is weakened by competing devices such as smart phones. The main challenge to this model is overall Internet bandwidth. Side effects of such an evolution would include replacing the role of the operating system with a Web browser plus whatever back-end environment the cloud supplier chose to provide, as well as creating a new product class of Web terminals.

InterSect360 Research Analysis

We see cloud computing as part of the logical progression in distributed computing. It is not completely revolutionary, nor is it a panacea that will provide any service that can be imagined. The business models must be considered in terms of cost and control, barriers and benefits.

Of all the cloud business models, InterSect360 Research believes that SaaS has the highest potential for success within HPC. It addresses several of the major dampening factors associated with cloud adoption and provides additional revenue opportunities in the services arena. It also targets industrial users, who would be the most likely to pay a premium for the product without attempting to develop competing solutions. Furthermore, companies can adopt SaaS models in the cloud in a phased or tiered way, first proving the concept on private clouds before giving themselves over to public or hybrid models. (This same phenomenon persists with private and public grids today.)

Organizations that have experience with the software and with in-house operations may look to SaaS options for peak-load management and capacity extension. However, we believe the greater opportunity lies in selling packaged cloud computing, software, and start-up services to companies testing HPC solutions. Our research indicates that there are major start-up barriers to adopting HPC solutions among small and medium-sized companies. These barriers include finding the expertise to create the organization's first scalable digital models.

The major barrier to SaaS adoption in HPC is the fragmentation of the applications software sector of the industry. The boutique nature of the opportunity may mean there is not sufficient volume to merit an ISV's investment in creating and marketing cloud-enabled versions of its applications. Interestingly, in a recursive manner, small SaaS providers could theoretically tap into larger cycles-on-demand cloud providers to supply the computing resources.

Similarly, implementing environment hosting within current cloud environments would entail significant effort by HPC user organizations to set up and manage storage and software environments. It would also be limited by software licensing issues, for industrial users in particular. Thus market opportunities for this option are very limited at this time. That said, a small organization could conceivably do all its computing in the cloud, keeping all its data on a cloud storage system, using only internally developed, open-source, or SaaS software, and trusting that its small size within the herd provides security.

Finally, we note that Web-based software services are not new to the market; they currently range from income tax preparation services to on-line gaming companies. SaaS fits into cloud markets based on the concept of work being sent to an outside party and results returned, without the sender knowing exactly how those results are generated. For some users, SaaS may inherently make sense. Ultimately, the best way to help users adopt HPC applications may be to make them Somebody Else's Problem.
