CTO Panel: Are Public Clouds Ripe for Mission Critical Applications?

By Nicole Hemsoth

February 15, 2011

This week we gathered the opinions of five technical leaders at cloud service companies to gauge their views on customer reception of the idea of placing mission-critical applications on public cloud resources. Chief Technical Officers from smaller public cloud-focused companies, including Stelligent, HyperStratus, Appirio, Arcus Global, and Nube Technologies, weighed in on their sense of customer acceptance of putting core applications in the cloud.

Just as important as the initial question about viability is a secondary query—for those that did decide to send mission-critical apps to the public cloud, what was the driving factor?

A number of surveys have been conducted over the course of the past year to gauge general sentiment about placing business-critical or mission-critical applications in the cloud, and more specifically on a public cloud resource such as that offered by Amazon Web Services.

Although survey data varies according to the respondent base, the consensus seems to be that there is still quite a bit of hesitancy to place mission-critical applications in an environment where there is not a complete sense of control—not to mention concerns about data protection and location, compliance and regulatory risks, fear of lock-in…the list tends to go on.

One recent survey conducted by ESG Research found that of the 600 American and European IT professionals questioned, 42% said that public clouds would not enter into their business models in the next five years. Among the top reasons listed were, perhaps not surprisingly, data and privacy concerns (43%), loss of control (32%), existing investments in current infrastructure (also 32%), and the need to feel that the cloud ecosystem is mature before diving in (29%); a further 28% responded that they were simply satisfied with their current infrastructure.

While conversations with enterprise IT leaders often follow this same trajectory, the time seemed ripe to check in with technical leaders at a number of cloud services companies to see whether their sense of customer concerns about placing mission-critical applications in the cloud matched the hesitancy reflected in the survey data.

In addition to gauging their sense of the climate for mission-critical applications running on public cloud resources, we also asked a secondary question—“is it a ‘tough sell’ for customers to put business-critical applications on such resources, and when it is not, what is the motivating factor?”

To provide some depth to the issue of the viability of mission-critical applications for public clouds (and what eventually tips the scale for some companies to make that decision), we gathered opinions from Lars Malmqvist, CTO and Director of Arcus Global Ltd.; Sonal Goyal, CTO/CEO of Nube Technologies; Paul Duvall, CTO at Stelligent; Glenn Weinstein, CTO at Appirio; and Bernard Golden of HyperStratus.

We’ll start with sentiments from a company that has experience dealing with public sector clients, Arcus Global Ltd.

Lars Malmqvist serves as Director and CTO at Arcus Global Ltd., a company that deals specifically with the needs of public sector clients in the UK. The company supports pilots, migration, development and planning for cloud computing projects at large government organizations. This public sector focus made the company a natural choice for the question of whether the concerns outweigh the benefits of running core applications on public cloud resources, since governments everywhere are approaching clouds with caution.

Lars provided a unique perspective as well because, in his experience, it is difficult to keep pace with the demand to put mission-critical applications in the cloud. Lars writes…

“At Arcus we work exclusively with public sector clients. If you’ve been anywhere near government ICT recently, you’ll know that cloud comes up in just about any conversation you have. Different groups respond differently to it: the managers love it for the cost savings, technical people tend to find it interesting and a bit threatening, while the security people really don’t seem to like it much at all.

That being said, on a day-to-day level, far from being a tough sell, the appetite our clients have for putting systems and applications on a public cloud infrastructure far outstrips our ability to actually deliver it in practice. We literally have organisations that would be willing to move their entire core infrastructure to the public cloud tomorrow if we could solve the technical, legal, and security challenges.

The constraints are well known and mainly revolve around security and compliance. Simply put, for some categories of data we simply don’t know what the compliance requirements are for putting data on the public cloud. Best practice and guidance have yet to mature, and laws always lag behind technology.

In the UK, the biggest challenge at the moment is around IL3 (Impact Level 3) data, which to put it crudely is data that is sensitive enough to really mess someone’s life up or to cause significant disruption to public services.

The existing security guidance simply doesn’t map onto a cloud infrastructure in a neat way. Therefore moving something like a system supporting adult social services to the public cloud would require the organization to make an independent risk assessment and be willing to stand by it in the face of external scrutiny. Few organizations in the public sector are quite that risk tolerant.

That being said, government bodies across the world are working on resolving such issues. The pressure is on to cut costs, and everyone in government ICT seems to be looking at the cloud to deliver them.

My expectation would be that within 12 to 18 months these issues will be resolved and clear guidance will be given by central government bodies and their regional equivalents on how to proceed even with highly sensitive data. When that is in place, I would expect a mass exodus of in-house systems, business critical or not, from at least local and regional government.”

Sonal Goyal is CTO and CEO at Nube Technologies, a provider of cloud solutions for large-scale analytics and big data problems. Nube’s HIHO, a Hadoop connector for databases and data sources, is an innovative framework that allows customers to move data to and from Hadoop clusters. The company is focused on data mining and analytics using Elastic MapReduce, Hive, Cassandra and related tools–the solutions behind handling both structured and unstructured data at large scale. Of the viability of public cloud resources for complex mission-critical applications, Sonal Goyal writes:

“I strongly believe that public cloud usage will grow phenomenally for mission critical business applications and data. The two main concerns organizations have about moving critical pieces to the cloud are security and vendor lock-in.

Companies have been cautious about moving sensitive data to a public cloud for fear of information security breaches–data being used by unauthorized channels. Data governance and the ability to audit and monitor data are also genuine concerns. Organizations have been worried that cloud providers like Amazon, GoGrid, Rackspace, Google, Microsoft, etc., who share their infrastructure, offer little support in this direction. The second concern is vendor lock-in. Current cloud providers do not offer a unified approach to seamlessly use their services across any provider. Organizations would like to safeguard against this; they would like the flexibility of being able to move critical applications from one cloud to another.

I believe that these concerns, though valid, will slowly alleviate themselves. These are the same concerns companies had over outsourcing, but we now outsource payroll and legal, key business processes, and even medical transcription across countries. Cloud providers are increasingly providing offerings for virtual private clouds and reserved infrastructure, such that organizations do not need to share if they don’t want to. Encryption, passwordless logins, firewalls, etc., already offer some level of data security.

Recent technical innovations like SecureCloud from TrendMicro and CipherCloud are first steps in making things better. On the interoperability issue, NIST is already working on the SAJACC project and things should get addressed soon. On the API level, there are efforts like DeltaCloud to make things easier.

The needs of the business, the agility required by the market, the ever-exploding data and the need for more and more capacity will drive this change. As businesses grow, they will have to rely more and more on public clouds. Organizations cannot afford to make massive upfront investments in infrastructure and support personnel. The pay-as-you-go model offered by cloud providers will soon be rampant, and force businesses to re-evaluate their key components and move them to the cloud. They will demand better security standards and uniformity from cloud providers, and they will get it.”

Glenn Weinstein is CTO at Appirio, a cloud solutions company that delivers both projects and professional services to customers with mission-critical needs. Glenn writes:

“We are definitely seeing large enterprises moving widespread mission- and business-critical operations to the public cloud. It’s not necessarily a tough sell, particularly to CIOs who have already recognized the value of looking first to public cloud solutions for emerging business problems.

By moving applications to the public cloud, enterprises delegate significant portions of many non-business-specific concerns, including scalability, performance, security, deployment, failover, backup, load balancing and interoperability, to large firms specializing in technology.

This frees up IT resources to focus nearly all their time and energy on using that technology to solve business problems. In this way, public cloud computing finally offers a solution to the long-standing dilemma of IT spending upwards of 70 percent of its budget on routine maintenance and operations. Shifting to the public cloud allows CIOs to flip this ratio and spend 70 percent or more on business analysis and process improvement.

As public cloud leaders like Salesforce.com and Google Apps gain widespread acceptance and experience rapid customer growth, more technology professionals and CIOs are experiencing the benefits to IT first-hand, lending credibility to public cloud claims about speeding up development processes and lowering costs. With a taste of this success, they are anxious to push additional projects into the cloud, at the same time that the vendors are greatly expanding their platform-as-a-service (PaaS) offerings.  We expect this growth to accelerate as CIOs recognize not only the total cost benefits, but also the speed-to-market improvements.”

Paul Duvall is CTO at Stelligent, which provides “Continuous Delivery Services-Continuous Delivery Operations Centers for large companies using cloud computing resources.” They have experience handling cloud implementations on Amazon’s servers and have worked with a number of customers to get their applications running in a public cloud environment. Paul writes:

“Our customers (health care, financial, real estate) typically employ a hybrid model by using the public cloud for their numerous non-production environments and a private cloud, or the traditionally hosted approach, for production systems. Given that this hybrid approach was barely a consideration for our customers just a few years ago, I see the trend toward moving systems to a public cloud continuing to gain speed.

Notably, we’ve found that automation is the key to getting the most out of moving to the cloud for mission-critical systems. For example, if you need to install database or application containers every time you stand up a new instance, you’re not getting the kinds of productivity gains that you’d achieve by automating the environment instantiation. Automating the provisioning and deployment provides the organization enormous flexibility to release their software wherever and whenever they choose.
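To make the point about automated environment instantiation concrete, here is a minimal, hypothetical sketch using the boto3 AWS SDK (a tool chosen for illustration, not one named by Duvall); the AMI ID, instance type, and bootstrap commands are placeholder assumptions. The idea is simply that a new instance configures its own application stack on first boot, so standing up another environment becomes a single API call rather than a manual install.

```python
# Minimal sketch of automated environment instantiation (hypothetical example;
# the AMI ID, instance type, and bootstrap steps are illustrative assumptions).
import boto3

# Bootstrap script passed as user data: the instance installs and starts its
# own application container on first boot, so no manual setup is required.
BOOTSTRAP = """#!/bin/bash
yum install -y java-1.8.0-openjdk tomcat
systemctl enable --now tomcat
"""

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    UserData=BOOTSTRAP,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Environment", "Value": "qa"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```

In practice the same pattern extends to full environment templates (multiple instances, databases, load balancers) driven by scripts of this kind, which is what makes the release-anywhere, release-anytime flexibility Duvall describes possible.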

Some customers are concerned about data security, etc. and whether a public cloud provider increases their vulnerability. If the customer’s concern is simply a lack of trust that the public cloud vendor will keep their data safe, we illustrate the various security processes and mechanisms applied by the public cloud vendor, and suggest applying appropriate application security techniques through encryption as they might normally do in any system that has identifiable information.
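As an illustration of the application-level encryption Duvall describes, the sketch below encrypts identifiable fields before a record ever leaves the customer’s side. It uses Python’s cryptography library; the field names and key handling are illustrative assumptions rather than any specific vendor’s mechanism.

```python
# Minimal sketch of application-level encryption of identifiable fields before
# they are written to cloud storage (illustrative; field names and key handling
# are assumptions, not a particular provider's feature).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key store the customer controls
cipher = Fernet(key)

record = {"patient_id": "12345", "ssn": "000-00-0000", "visit_date": "2011-02-15"}

# Encrypt only the identifiable fields; non-sensitive fields stay queryable.
SENSITIVE = {"patient_id", "ssn"}
protected = {
    k: cipher.encrypt(v.encode()).decode() if k in SENSITIVE else v
    for k, v in record.items()
}

# `protected` can now be stored with the cloud provider; decryption requires
# the key, which never leaves the customer's control.
original_ssn = cipher.decrypt(protected["ssn"].encode()).decode()
assert original_ssn == record["ssn"]
```

Because only ciphertext is stored with the provider, a breach of the hosted data store exposes nothing readable without the customer-held key.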

The famous quote “trust, but verify” is quite applicable to the increasing trend of companies moving their mission-critical systems from internally-hosted infrastructures to a public cloud. It’s shocking that some large organizations apply implicit trust to their own operations teams who manage their systems. Yet, if many of these organizations performed an internal security audit, the more respected public cloud vendors would win “hands down” in terms of processes, security accreditation, etc. every single time.”

Bernard Golden leads HyperStratus, a company that helps organizations take advantage of cloud architectures by advising on the infrastructure, provider, application and other choices customers must make. Given his experience working with enterprise customers at every stage of the cloud deployment process, he has had time to form some firm opinions about the viability of public clouds for mission-critical applications. Golden states:

“Many organizations have reservations about putting critical business applications in the cloud. The primary concern raised about public cloud computing is security, although it often turns out that while the term security is used, the concern actually centers on compliance or risk exposure. Our belief and experience is that public cloud computing is viable today for many mission-critical applications.

The primary motivation for application groups to embrace a public cloud alternative is dissatisfaction with the current internal data center offering, for reasons of lack of responsiveness or cost. One client of ours, a Fortune 500 company in the information services industry, considered the corporate data center, but decided to pursue a public cloud option because it would reduce their cost on the order of 75%. Nonetheless, convincing large organizations to use a public cloud infrastructure is often difficult and many are not yet ready to pursue such a choice.

Our expectation is that use of public cloud computing by large organizations will gradually increase as they become more familiar and comfortable with that decision. A galvanizing event for making such a decision is to see a peer organization succeed with a similar application that is being considered for placement in the public cloud.”

This concludes our round of gathered views on the subject, but we’d like your input. Whether you’re a cloud vendor or an end user weighing the benefits versus the risks of public cloud resources, there are very likely at least a few elements of the ideas presented here that you agree or disagree with. Are public clouds ready for the responsibility, and do you feel that the time is right to place trust in the clouds?
