Licensed to Bill?

By Addison Snell, InterSect360

April 8, 2010

HPC Application Software Vendors Begin to Adapt to the Demands of Utility, SaaS, and Cloud Computing

At a casual glance it seems the entire High Performance Computing industry is going gung ho on clouds. HPC system vendors are launching cloud-enabled infrastructure services. Middleware providers offer solutions for migrating applications to public, private, or hybrid clouds. And end users are intrigued to learn how they might maintain capability while reducing capital expenditures. “You mean I don’t have to worry about the cost and maintenance of all that infrastructure? Sign me up!”

Amidst the seemingly ubiquitous fanfare announcing the arrival of a new can’t-lose paradigm, it would be almost forgivable to overlook what should be a plot-turning question. What, specifically, are you going to run in the cloud?
 
Faced with mounting pressure from partners and end users in the HPC community, application software vendors are striving to cope with what it means to offer their software as a service. To learn more about the potential for cloud expansion, InterSect360 Research has been conducting a study of the outlook for SaaS and cloud computing models among the HPC ISV community.
 
One thing is clear: ISVs are nearly unanimous in recognizing a customer demand for cloud computing models of some type, and they also generally recognize cloud as a growth opportunity, at least in the long term. However, there are significant limitations hindering the potential transition to clouds.
 
ISV Applications in the Cloud, Now and Then
 
With so much potential interest from end users, many application software vendors have already implemented flexible licensing models that allow cloud or utility computing access. The majority of ISVs interviewed thus far have already implemented some type of utility, SaaS, grid, or cloud licensing option for at least one of their HPC applications, and the industry has already recognized some utility computing successes. HPCwire recently publicized the use of Exa PowerFLOW computational fluid dynamics simulations in optimizing the performance of the eventual gold medal–winning U.S. four-man bobsled; that software was run on hardware leased through the IBM OnDemand program.
 
But this is not a new phenomenon. Grid computing has been around for more than 10 years, and utility computing models predate grids. IBM has been offering OnDemand services for years, and other hardware vendors have (or used to have) similar programs. And many new “cloud” offerings are based on repackaged, remarketed grid technologies that are suddenly gaining new attention.
 
There is certainly new technology in cloud; in particular, the ability to use a web browser to gain access to resources distinguishes cloud from grid. InterSect360 Research defines cloud computing as accessing part of an organization’s IT infrastructure or workflow through a web (or web-like) interface. This definition uses the web interface (or “web-like,” in the case of some intranets) to distinguish cloud from grid and other utility computing methodologies, and it specifies applications by the role they play in the organization. At the boundary, an application like Salesforce.com replaces part of an organization’s workflow and can be considered a cloud application, whereas the FishVille game on Facebook is a Web 2.0 application but not part of the player’s IT infrastructure or workflow; therefore FishVille is not considered to be cloud computing.
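
The two-condition test can be stated compactly. The sketch below is purely illustrative, with a hypothetical function name and inputs rather than any formal InterSect360 methodology: an application qualifies as cloud only if it is both reached through a web (or web-like) interface and part of the organization’s IT infrastructure or workflow.

```python
# Hypothetical encoding of the two-part definition above; not a formal
# InterSect360 methodology, just the boundary logic made explicit.

def is_cloud_computing(web_interface: bool, part_of_workflow: bool) -> bool:
    """Cloud requires BOTH a web (or web-like) interface AND a role in
    the organization's IT infrastructure or workflow."""
    return web_interface and part_of_workflow

# The boundary cases from the text:
print(is_cloud_computing(True, True))    # Salesforce.com -> True (cloud)
print(is_cloud_computing(True, False))   # FishVille -> False (Web 2.0, not cloud)
```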
 
Precise definitions notwithstanding, there seems to be a clear opportunity for offering utility or SaaS licensing models. Yet even among the ISVs that are actively pursuing cloud, most don’t see the opportunity exceeding 10% of their software revenue in the next two to three years, due to inherent barriers to adoption.
 
Barriers to HPC SaaS
 
Across all vertical markets in HPC, the software vendors we interviewed consistently named two concerns that prevent organizations from running applications in the cloud: data movement and data security. These issues are potential problems for any application, but for commercial ISV codes they can be crucial, because end users cannot risk the loss of control of their core intellectual property.
 
“Our customers are asking us for cloud implementations of our software, but design security remains a significant barrier,” said Andy Biddle, Product Marketing Director at Magma Design Automation, makers of the Talus and Titan applications for EDA markets. “We don’t think cloud models will contribute significantly to our revenue this year or next year. Maybe in five or ten years, but not soon.”
 
Another factor cited as a major hurdle by some ISVs but not at all by others is the creation of the licensing models, including a methodology for protecting the licenses in the cloud. The bifurcation stems from the differences in how applications are sold in the absence of cloud: ISVs that have time-based or site-based licensing schemes tend not to have a problem with utility licensing, whereas those that have licensed applications strictly by core, socket, or node tend to see the creation of cloud-friendly licensing models as a barrier.
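
The practical difference between the two camps can be sketched in a few lines of code. The classes below are a hedged illustration, with invented names and fields rather than any vendor’s actual license-manager API: a metered, time-based pool doesn’t care where cycles are consumed, while a host-locked scheme breaks down on transient cloud nodes.

```python
# Illustrative sketch only: hypothetical license models, not a real
# license manager's API.

class TimeBasedLicense:
    """Meters usage in core-hours. Because it is indifferent to where a
    job runs, it maps naturally onto utility/cloud consumption."""

    def __init__(self, core_hours):
        self.remaining = core_hours  # prepaid pool of core-hours

    def charge(self, cores, hours):
        cost = cores * hours
        if cost > self.remaining:
            return False             # pool exhausted; deny the run
        self.remaining -= cost
        return True


class NodeLockedLicense:
    """Tied to specific host identifiers. An arbitrary, short-lived cloud
    VM is never on the list, so the check fails off-premises."""

    def __init__(self, licensed_hosts):
        self.licensed_hosts = set(licensed_hosts)

    def charge(self, hostname):
        return hostname in self.licensed_hosts


# A time-based pool happily charges a burst of cloud cycles...
pool = TimeBasedLicense(core_hours=10000)
print(pool.charge(cores=256, hours=4))            # True

# ...while a node-locked license denies an unknown cloud VM.
lock = NodeLockedLicense(["hpc-node-01", "hpc-node-02"])
print(lock.charge("ec2-vm-3f9a"))                 # False
```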
 
These barriers are not insurmountable. ANSYS is one example of a company that has recently introduced new HPC licensing options for its customers, designed to enable use of its software on hardware located anywhere, by end users located anywhere. But in this case, ANSYS still does not see its users clamoring to relinquish complete control of their codes.
 
“Our customers involved in engineering simulation clearly need flexibility to access computing infrastructure however it makes most sense for them – down the hall, across the planet, rented in the ‘cloud’ or owned,” said Barbara Hutchings, Director of Strategic Partnerships at ANSYS. “They need flexibility to use licenses wherever hardware is available and to address peak-capacity needs. ANSYS and our HPC industry partners support this flexible deployment today with the goal of enabling more customers to use HPC and gain enhanced insight to drive product development decisions.”
 
SaaS: A Business Opportunity for ISVs?
 
For end users who are adopting cloud, there is then a question of which parts of their infrastructure or workflow they wish to outsource. Platform as a service (PaaS) or infrastructure as a service (IaaS) models do not necessarily imply the outsourcing of software, and private clouds do not necessarily imply that organizations are leasing instead of owning. These distinctions between IaaS and SaaS, and between public and private clouds, are currently a significant limiting factor in the move toward HPC SaaS. Many organizations are implementing internal clouds, but they are using licenses they already have – site licenses, time-based licenses, or even token-based usage licenses – to run their ISV applications internally on a utility basis. In the case of private clouds, the hardware and software might all be owned, but the end user within the organization is indifferent to the back-end infrastructure.
 
Similarly, IaaS models move organizations into cloud computing in a way that does not imply SaaS. In some cases they may have hardware on-site that is leased on an as-used or on-demand basis, but the software applications are owned. This type of workflow is cloud but not SaaS and does not require any modification in ISV licensing approaches.
 
An open question then is whether cloud computing, via SaaS, represents an increase in the total business opportunity for ISVs. Here it is important to emphasize that cloud computing is not a market or industry in itself. Rather, it is a methodology for accessing part of an infrastructure or workflow that in most cases already existed. That is, users were already running the applications one way, and now they’re going to run them a different way.
 
That said, there is the potential for organizations to realize increased application usage through SaaS. One scenario for this is “cloud-bursting” – using either a public or private cloud to access additional cycles during peak workload times. This structure appeals to established HPC application users who want to do more than their current infrastructure allows without increasing their capital outlay, and it may represent the most significant near-term opportunity. But although cloud computing is currently in vogue, utility computing models have offered this same benefit in the past, and the practice has never become a significant dynamic across the HPC industry.
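
As a concrete illustration of the cloud-bursting pattern, consider the toy scheduler policy below. It is a minimal sketch under assumed names and thresholds, not any particular middleware product: baseline load runs on owned hardware, and only peak demand overflows to rented capacity.

```python
# Minimal cloud-bursting sketch; function name and inputs are
# illustrative assumptions, not a real scheduler's API.

def dispatch(job_cores, local_free_cores):
    """Route a job to the owned cluster if it fits; otherwise burst
    it to rented cloud cycles."""
    if job_cores <= local_free_cores:
        return "local"   # baseline workload stays on owned infrastructure
    return "cloud"       # peak workload rents cycles instead of buying nodes

# Example: a 512-core job arrives while only 128 local cores are idle,
# so the demand bursts to the cloud.
print(dispatch(512, 128))   # -> "cloud"
```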
 
Another potential business model is for cloud computing to enable new entrants into HPC by reducing the costs associated with hardware and software. This is a nice idea but falls short of addressing some of the more significant barriers for new entrants to HPC: creation of digital models and synchronization with physical testing, plus the considerable social aspect of changing a workflow within an organization. Merely reducing cost is a necessary but insufficient condition for driving HPC adoption, and until an ISV (or another type of host) is capable of offering a more complete “digital workflow as a service,” it will be difficult for SaaS alone to drive new HPC adoption.
 
Yes, cloud is a major phenomenon in HPC. Yes, ISVs are under a lot of pressure to offer SaaS and utility licensing models. Yes, many of them have reacted to this already. But the application software vendor community is probably correct in predicting that cloud will not have a dramatic impact on its business in the immediate term, as the majority of cloud adopters will explore private clouds and IaaS models before moving to HPC SaaS.
 
For now, the IT community is running hell-for-leather to adopt cloud computing. As for what they’ll actually run in the cloud? By the time the user community is ready to move its data into the cloud, application software vendors should be ready.