HPC Application Software Vendors Begin to Adapt to the Demands of Utility, SaaS, and Cloud Computing
At a casual glance it seems the entire High Performance Computing industry is going gung ho on clouds. HPC system vendors are launching cloud-enabled infrastructure services. Middleware providers offer solutions for migrating applications to public, private, or hybrid clouds. And end users are intrigued to learn how they might maintain capability while reducing capital expenditures. “You mean I don’t have to worry about the cost and maintenance of all that infrastructure? Sign me up!”
Amidst the seemingly ubiquitous fanfare announcing the arrival of a new can’t-lose paradigm, it would be almost forgivable to overlook what should be a plot-turning question. What, specifically, are you going to run in the cloud?
Faced with mounting pressure from partners and end users in the HPC community, application software vendors are striving to work out what it means to offer their software as a service. To learn more about the potential for cloud expansion, InterSect360 Research has been conducting a study of the outlook for SaaS and cloud computing models among the HPC ISV community.
One thing is clear: ISVs are nearly unanimous in recognizing a customer demand for cloud computing models of some type, and they also generally recognize cloud as a growth opportunity, at least in the long term. However, there are significant limitations hindering the potential transition to clouds.
ISV Applications in the Cloud, Now and Then
With so much potential interest from end users, many application software vendors have already implemented flexible licensing models that allow cloud or utility computing access. The majority of ISVs interviewed thus far have already implemented some type of utility, SaaS, grid, or cloud licensing option for at least one of their HPC applications, and the industry has already recognized some utility computing successes. HPCwire recently publicized the use of Exa PowerFLOW computational fluid dynamics simulations in optimizing the performance of the eventual gold medal–winning U.S. four-man bobsled; that software was run on hardware leased through the IBM OnDemand program.
But this is not a new phenomenon. Grid computing has been around for more than 10 years, and utility computing models predate grids. IBM has been offering OnDemand services for years, and other hardware vendors have (or used to have) similar programs. And many new “cloud” offerings are based on repackaged, remarketed grid technologies that are suddenly gaining new attention.
There is certainly new technology in cloud; in particular, the ability to use a web browser to gain access to resources distinguishes cloud from grid. InterSect360 Research defines cloud computing as accessing part of an organization’s IT infrastructure or workflow through a web (or web-like) interface. This definition uses the web interface (or “web-like,” in the case of some intranets) to distinguish cloud from grid and other utility computing methodologies, and it specifies applications by what role they play in the organization. At the boundary, an application like Salesforce.com replaces part of an organization’s workflow and can be considered a cloud application, whereas the Fishville game on Facebook is a Web 2.0 application but not part of the player’s IT infrastructure or workflow, and therefore Fishville is not considered to be cloud computing.
Precise definitions notwithstanding, there seems to be a clear opportunity for offering utility or SaaS licensing models. Yet even among those ISVs that are actively pursuing cloud, most don’t see the opportunity exceeding 10% of their software revenue in the next two to three years, due to inherent barriers to adoption.
Barriers to HPC SaaS
Across all vertical markets in HPC, the software vendors we interviewed consistently named two concerns that prevent organizations from running applications in the cloud: data movement and data security. These issues are potential problems for any application, but for commercial ISV codes they can be crucial, because end users cannot risk the loss of control of their core intellectual property.
“Our customers are asking us for cloud implementations of our software, but design security remains a significant barrier,” said Andy Biddle, Product Marketing Director at Magma Design Automation, makers of the Talus and Titan applications for EDA markets. “We don’t think cloud models will contribute significantly to our revenue this year or next year. Maybe in five or ten years, but not soon.”
Another factor, cited as a major hurdle by some ISVs and not at all by others, is the creation of the licensing models themselves, including a methodology for protecting licenses in the cloud. The bifurcation stems from how applications are sold outside the cloud: ISVs with time-based or site-based licensing schemes tend not to have a problem with utility licensing, whereas those that have licensed applications strictly by core, socket, or node tend to see the creation of cloud-friendly licensing models as a barrier.
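To make the contrast concrete, here is a minimal sketch in Python of the two licensing styles. Everything in it is hypothetical – the class names, the one-token-per-core-hour pricing, and the host check are illustrative assumptions, not any vendor’s actual licensing API:

```python
# Hypothetical illustration only -- not any vendor's actual licensing scheme.
# A usage-based (token) model debits a pool wherever the job runs, so it
# transfers naturally to utility or cloud execution. A per-core model is
# tied to counting, and trusting, the specific hardware a job lands on.

class TokenLicense:
    """Usage-based: debit tokens per core-hour, regardless of location."""
    def __init__(self, tokens: float):
        self.tokens = tokens

    def checkout(self, cores: int, hours: float) -> bool:
        cost = cores * hours  # assumed pricing: one token per core-hour
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False          # pool exhausted: deny or defer the job


class PerCoreLicense:
    """Capacity-based: a fixed number of licensed cores on known hosts."""
    def __init__(self, licensed_cores: int, licensed_hosts: set[str]):
        self.licensed_cores = licensed_cores
        self.licensed_hosts = licensed_hosts

    def checkout(self, cores: int, host: str) -> bool:
        # A transient cloud node is not in the licensed host set, so the
        # job is refused even when capacity exists -- the cloud barrier.
        return host in self.licensed_hosts and cores <= self.licensed_cores


pool = TokenLicense(tokens=10_000)
print(pool.checkout(cores=64, hours=8))     # True: runs anywhere tokens allow

site = PerCoreLicense(128, {"cluster-login01"})
print(site.checkout(64, "ec2-node-1234"))   # False: unknown cloud host
```

The token model never asks where a job runs, which is exactly why it ports to the cloud; the per-core model encodes the hardware into the license itself, and that is the property that has to be redesigned.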
These barriers are not insurmountable. ANSYS is one example of a company that has recently introduced new HPC licensing options for its customers, designed to enable use of its software on hardware located anywhere, by end users located anywhere. But in this case, ANSYS still does not see its users clamoring to relinquish complete control of their codes.
“Our customers involved in engineering simulation clearly need flexibility to access computing infrastructure however it makes most sense for them – down the hall, across the planet, rented in the ‘cloud’ or owned,” said Barbara Hutchings, Director of Strategic Partnerships at ANSYS. “They need flexibility to use licenses wherever hardware is available and to address peak-capacity needs. ANSYS and our HPC industry partners support this flexible deployment today with the goal of enabling more customers to use HPC and gain enhanced insight to drive product development decisions.”
SaaS: A Business Opportunity for ISVs?
For end users who are adopting cloud, the question then becomes which parts of their infrastructure or workflow they wish to outsource. Platform as a service (PaaS) or infrastructure as a service (IaaS) models do not necessarily imply the outsourcing of software, and private clouds do not necessarily imply that organizations are leasing instead of owning. These distinctions, between IaaS and SaaS and between public and private clouds, are currently a significant limiting factor in the move toward HPC SaaS. Many organizations are implementing internal clouds, but they are using licenses they already have – site licenses, time-based licenses, or even token-based usage licenses – to run their ISV applications internally on a utility basis. In the case of private clouds, the hardware and software might all be owned, but the end user within the organization is indifferent to the back-end infrastructure.
Similarly, IaaS models move organizations into cloud computing in a way that does not imply SaaS. In some cases they may have hardware on-site that is leased on an as-used or on-demand basis, but the software applications are owned. This type of workflow is cloud but not SaaS and does not require any modification in ISV licensing approaches.
An open question then is whether cloud computing, via SaaS, represents an increase in the total business opportunity for ISVs. Here it is important to emphasize that cloud computing is not a market or industry in itself. Rather it is a methodology for accessing part of an infrastructure or workflow that in most cases already existed. That is, users were already running the applications one way, and now they’re going to run them a different way.
That said, there is potential for organizations to realize increased application usage through SaaS. One scenario for this is “cloud-bursting” – using either a public or private cloud to access additional cycles during peak workload times. This structure appeals to established HPC application users who want to do more than their current infrastructure allows without increasing their capital outlay, and it may represent the most significant near-term opportunity. But although cloud computing is currently in vogue, utility computing models have offered this benefit in the past, and it has never become a significant dynamic across the HPC industry.
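As a rough illustration of cloud-bursting, the following sketch reduces the decision to a simple policy: keep jobs on owned hardware while it has headroom, and overflow to rented capacity only when the local queue is saturated. The names and thresholds are hypothetical assumptions, not any particular scheduler’s interface:

```python
# Hypothetical cloud-bursting policy sketch. Real resource managers expose
# this differently; the threshold and field names here are illustrative.

from dataclasses import dataclass


@dataclass
class ClusterState:
    idle_cores: int    # cores currently free on the in-house cluster
    queued_jobs: int   # jobs waiting in the local queue


def placement(job_cores: int, state: ClusterState,
              burst_threshold: int = 10) -> str:
    """Decide where a job runs under a simple bursting policy."""
    if job_cores <= state.idle_cores:
        return "local"   # owned capacity covers the job at no extra cost
    if state.queued_jobs >= burst_threshold:
        return "cloud"   # peak demand: rent cycles rather than wait
    return "queue"       # ordinary backlog: wait for owned cores


# Example: a 256-core job arrives while the cluster is saturated.
print(placement(256, ClusterState(idle_cores=32, queued_jobs=40)))  # "cloud"
```

The appeal to established users is visible in the policy itself: the cloud path is taken only at peak, so capital outlay stays sized for the steady-state load.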
Another potential business model is for cloud computing to enable new entrants into HPC by reducing the costs associated with hardware and software. This is a nice idea but falls short of addressing some of the more significant barriers for new entrants to HPC: creation of digital models and synchronization with physical testing, plus the considerable social aspect of changing a workflow within an organization. Merely reducing cost is a necessary but insufficient condition for driving HPC adoption, and until an ISV (or another type of host) is capable of offering a more complete “digital workflow as a service,” it will be difficult for SaaS alone to drive new HPC adoption.
Yes, cloud is a major phenomenon in HPC. Yes, ISVs are under a lot of pressure to offer SaaS and utility licensing models. Yes, many of them have reacted to this already. But the application software vendor community is probably correct in predicting that cloud will not have a dramatic impact on its business in the immediate term, as most cloud adopters explore private clouds and IaaS models before moving to HPC SaaS.
For now, the IT community is running hell-for-leather to adopt cloud computing. As for what they’ll actually run in the cloud? By the time the user community is ready to move its data into the cloud, application software vendors should be ready.