CTO Panel: Are Public Clouds Ripe for Mission Critical Applications?

By Nicole Hemsoth

February 15, 2011

This week we gathered the opinions of five technical leaders at cloud service companies to gauge how their customers are receiving the idea of placing mission-critical applications on public cloud resources. Chief Technical Officers from smaller, public cloud-focused companies, including Stelligent, HyperStratus, Appirio, Arcus Global, and Nube Technologies, weighed in on their sense of customer acceptance of putting core applications in the cloud.

Just as important as the initial question of viability is a secondary query: for those who did decide to send mission-critical applications to the public cloud, what was the driving factor?

A number of surveys have been conducted over the course of the past year to gauge general sentiment about placing business-critical or mission-critical applications in the cloud, and more specifically on a public cloud resource such as that offered by Amazon Web Services.

Although survey data varies according to the respondent base, the consensus seems to be that there is still quite a bit of hesitancy to place mission-critical applications in an environment where there is not a complete sense of control—not to mention concerns about data protection and location, compliance and regulatory risks, fear of lock-in…the list tends to go on.

One recent survey conducted by ESG Research found that of the 600 American and European IT professionals questioned, 42% said that public clouds would not enter into their business models in the next five years. Among the top reasons listed were, perhaps not surprisingly, data and privacy concerns (43%), loss of control (32%), existing investments in current infrastructure (also 32%) and the need to see the cloud ecosystem mature before diving in (29%), while 28% responded that they were simply satisfied with their existing infrastructure.

While conversations with enterprise IT leaders often follow this same trajectory, the time seemed ripe to check in with technical leaders at a number of cloud services companies to see whether their sense of customer concerns about placing mission-critical applications in the cloud matched the hesitancy reflected in the survey data.

In addition to gauging their sense of the climate for mission-critical applications running on public cloud resources, we also asked a secondary question: is it a “tough sell” for customers to put business-critical applications on such resources, and when it is not, what is the motivating factor?

To provide some depth on the viability of mission-critical applications for public clouds (and on what eventually tips the scale for some companies), we gathered opinions from Lars Malmqvist, CTO and Director of Arcus Global Ltd.; Sonal Goyal, CTO and CEO of Nube Technologies; Paul Duvall, CTO at Stelligent; Glenn Weinstein, CTO at Appirio; and Bernard Golden of HyperStratus.

We’ll start with sentiments from a company that has experience dealing with public sector clients, Arcus Global Ltd.

Lars Malmqvist serves as Director and CTO at Arcus Global Ltd., a company that deals specifically with the needs of public sector clients in the UK. The company supports pilots, migration, development and planning for cloud computing projects for large government organizations. This public sector focus made the company a natural choice for the question of whether the concerns outweigh the benefits for core applications on public cloud resources, since governments everywhere are approaching clouds with caution.

Lars provided a unique perspective as well because, in his experience, the challenge lies in keeping pace with the demand to put mission-critical applications in the cloud. Lars writes:

“At Arcus we work exclusively with public sector clients. If you’ve been anywhere near government ICT recently, you’ll know that cloud comes up in just about any conversation you have. Different groups respond differently to it: the managers love it for the cost savings, technical people tend to find it interesting and a bit threatening, while the security people really don’t seem to like it much at all.

That being said, on a day-to-day level, far from being a tough sell, the appetite our clients have for putting systems and applications on a public cloud infrastructure far outstrips our ability to actually deliver it in practice. We literally have organisations that would be willing to move their entire core infrastructure to the public cloud tomorrow if we could solve the technical, legal, and security challenges.

The constraints are well known and mainly revolve around security and compliance. Simply put, for some categories of data we don’t know what the compliance requirements are for putting it on the public cloud. Best practice and guidance have yet to mature, and laws always lag behind technology.

In the UK, the biggest challenge at the moment is around IL3 (Impact Level 3) data, which, to put it crudely, is data sensitive enough to really mess someone’s life up or to cause significant disruption to public services.

The existing security guidance simply doesn’t map onto a cloud infrastructure in a neat way. Therefore moving something like a system supporting adult social services to the public cloud would require the organization to make an independent risk assessment and be willing to stand by it in the face of external scrutiny. Few organizations in the public sector are quite that risk tolerant.

That being said, government bodies across the world are working on resolving such issues. The pressure is on to cut costs, and everyone in government ICT seems to be looking to the cloud to deliver them.

My expectation would be that within 12 to 18 months these issues will be resolved and clear guidance will be issued by central government bodies and their regional equivalents on how to proceed even with highly sensitive data. When that is in place, I would expect a mass exodus of in-house systems, business critical or not, from at least local and regional government.”

Sonal Goyal is CTO and CEO at Nube Technologies, a provider of cloud solutions for large-scale analytics and big data problems. Nube’s HIHO, a Hadoop connector for databases and data sources, is an innovative framework that allows customers to move data to and from Hadoop clusters. The company is focused on data mining and analytics using Elastic MapReduce, Hive, Cassandra and related tools, the solutions behind handling both structured and unstructured data at large scale. On the viability of public cloud resources for complex mission-critical applications, Sonal Goyal writes:

“I strongly believe that public cloud usage will grow phenomenally for mission critical business applications and data. The two main concerns organizations have about moving critical pieces to the cloud are security and vendor lock-in.

Companies have been cautious about moving sensitive data to a public cloud out of concern for information security, the fear being that data could be used by unauthorized channels. Data governance and the ability to audit and monitor data are also genuine concerns. Organizations have been worried that cloud providers like Amazon, GoGrid, Rackspace, Google and Microsoft, who share their infrastructure, offer little support in this direction. The second concern is vendor lock-in. Current cloud providers do not offer a unified approach that lets their services be used seamlessly across providers. Organizations would like to safeguard against this; they want the flexibility of being able to move critical applications from one cloud to another.

I believe that these concerns, though valid, will slowly fade. These are the same concerns companies had over outsourcing, yet we now outsource payroll, legal work, key business processes and even medical transcription across countries. Cloud providers are increasingly offering virtual private clouds and reserved infrastructure, so that organizations do not need to share if they don’t want to. Encryption, passwordless logins, firewalls and the like already offer some level of data security.

Recent technical innovations like SecureCloud from Trend Micro and CipherCloud are first steps in making things better. On the interoperability issue, NIST is already working on the SAJACC project, and things should get addressed soon. At the API level, there are efforts like DeltaCloud to make things easier.

The needs of the business, the agility required by the market, the ever-exploding data and the need for more and more capacity will drive this change. As businesses grow, they will have to rely more and more on public clouds. Organizations cannot afford to make massive upfront investments in infrastructure and support personnel. The pay-as-you-go model offered by cloud providers will soon be rampant and will force businesses to re-evaluate their key components and move them to the cloud. They will demand better security standards and uniformity from cloud providers, and they will get it.”
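Goyal’s lock-in point lends itself to a concrete illustration. The sketch below is ours, not Nube’s: it assumes Apache Libcloud, an open-source library in the same spirit as the DeltaCloud effort she mentions, and uses placeholder credentials throughout. The only thing it demonstrates is that one driver-based API can front several providers, which is the kind of uniformity customers are asking for.

    # Hypothetical sketch: list running servers on two providers through one API.
    # Apache Libcloud is an assumption here; the credentials and account names
    # are placeholders, not real values.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    ACCOUNTS = {
        Provider.EC2: ("EC2_ACCESS_KEY", "EC2_SECRET_KEY"),
        Provider.RACKSPACE: ("RACKSPACE_USER", "RACKSPACE_API_KEY"),
    }

    def list_all_nodes():
        # The same calls work regardless of which provider sits behind the driver.
        for provider, (key, secret) in ACCOUNTS.items():
            driver = get_driver(provider)(key, secret)
            for node in driver.list_nodes():
                print(provider, node.name, node.state)

    if __name__ == "__main__":
        list_all_nodes()

If provisioning logic is written against an abstraction like this rather than against a single vendor’s API, moving a workload from one cloud to another becomes closer to a configuration change than a rewrite, which is precisely the safeguard against lock-in that Goyal describes.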

Glenn Weinstein is CTO at Appirio, a cloud solutions company that delivers both projects and professional services to customers with mission-critical needs. Glenn writes:

“We are definitely seeing large enterprises moving widespread mission- and business-critical operations to the public cloud. It’s not necessarily a tough sell, particularly to CIOs who have already recognized the value of looking first to public cloud solutions for emerging business problems.

By moving applications to the public cloud, enterprises delegate significant portions of many non-business-specific concerns, including scalability, performance, security, deployment, failover, backup, load balancing and interoperability, to large firms specializing in technology.

This frees up IT resources to focus nearly all their time and energy on using that technology to solve business problems. In this way, public cloud computing finally offers a solution to the long-standing dilemma of IT spending upwards of 70 percent of its budget on routine maintenance and operations. Shifting to the public cloud allows CIOs to flip this ratio and spend 70 percent or more on business analysis and process improvement.

As public cloud leaders like Salesforce.com and Google Apps gain widespread acceptance and experience rapid customer growth, more technology professionals and CIOs are experiencing the benefits to IT first-hand, lending credibility to public cloud claims about speeding up development and lowering costs. With a taste of this success, they are eager to push additional projects into the cloud, at the same time that the vendors are greatly expanding their platform-as-a-service (PaaS) offerings. We expect this growth to accelerate as CIOs recognize not only the total cost benefits but also the speed-to-market improvements.”

Paul Duvall is CTO at Stelligent, which provides “Continuous Delivery Services: Continuous Delivery Operations Centers for large companies using cloud computing resources.” The company has experience handling cloud implementations on Amazon Web Services and has worked with a number of customers to get their applications running in a public cloud environment. Paul writes:

“Our customers (health care, financial, real estate) typically employ a hybrid model, using the public cloud for their numerous non-production environments and a private cloud, or the traditionally hosted approach, for production systems. Given that this hybrid approach was barely a consideration for our customers just a few years ago, I see the trend toward moving systems to a public cloud continuing to gain speed.

Notably, we’ve found that automation is the key to getting the most out of moving mission-critical systems to the cloud. For example, if you need to install database or application containers by hand every time you stand up a new instance, you’re not getting the kinds of productivity gains you’d achieve by automating the environment instantiation. Automating provisioning and deployment gives organizations enormous flexibility to release their software wherever and whenever they choose.

Some customers are concerned about data security and whether a public cloud provider increases their vulnerability. If the customer’s concern is simply a lack of trust that the public cloud vendor will keep their data safe, we illustrate the various security processes and mechanisms applied by the public cloud vendor and suggest applying appropriate application-level security techniques, such as encryption, as they would in any system that handles identifiable information.

The famous maxim “trust, but verify” is quite applicable to the increasing trend of companies moving their mission-critical systems from internally hosted infrastructure to a public cloud. It’s striking that some large organizations place implicit trust in their own operations teams who manage their systems; yet if many of these organizations performed an internal security audit, the more respected public cloud vendors would win hands down on processes, security accreditation and the like every single time.”
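Duvall’s automation argument can be made concrete with a short sketch of our own; it is not Stelligent’s tooling. The example below assumes the boto library for Amazon Web Services, and the AMI ID, key pair and region are placeholders. It shows the pattern he describes: a new instance installs and starts its own database on first boot via a user-data script, so standing up another environment requires no hand configuration.

    # Hypothetical sketch: launch an instance that provisions itself on boot.
    # boto, the AMI ID, the key pair and the region are assumptions or
    # placeholders, not details from Stelligent.
    import boto.ec2

    USER_DATA = """#!/bin/bash
    # Executed automatically on first boot by cloud-init:
    apt-get update -y
    apt-get install -y mysql-server
    service mysql start
    """

    def launch_preconfigured_instance():
        # Credentials come from the environment or a ~/.boto config file.
        conn = boto.ec2.connect_to_region("us-east-1")
        reservation = conn.run_instances(
            "ami-12345678",              # placeholder machine image
            key_name="example-keypair",  # placeholder key pair
            instance_type="m1.small",
            user_data=USER_DATA,         # the script above runs at first boot
        )
        return reservation.instances[0]

    if __name__ == "__main__":
        print(launch_preconfigured_instance().id)

Because the environment definition lives in a script rather than in someone’s memory, an identical instance can be recreated on demand, which is what makes releasing software wherever and whenever the organization chooses realistic.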

Bernard Golden leads HyperStratus, a company that helps organizations take advantage of cloud architectures by advising on the infrastructure, provider, application and other choices customers must make. Given his experience working with enterprise customers at every stage of the cloud deployment process, he has had time to form some firm opinions about the viability of public clouds for mission-critical applications. Golden states:

“Many organizations have reservations about putting critical business applications in the cloud. The primary concern raised about public cloud computing is security, though it often turns out that while the term security is used, the concern actually centers on compliance or risk exposure. Our belief and experience is that public cloud computing is viable today for many mission-critical applications.

The primary motivation for application groups to embrace a public cloud alternative is dissatisfaction with the current internal data center offering, whether because of a lack of responsiveness or because of cost. One client of ours, a Fortune 500 company in the information services industry, considered the corporate data center but decided to pursue a public cloud option because it would reduce costs on the order of 75 percent. Nonetheless, convincing large organizations to use a public cloud infrastructure is often difficult, and many are not yet ready to pursue such a choice.

Our expectation is that use of public cloud computing by large organizations will gradually increase as they become more familiar and comfortable with that decision. A galvanizing event for making such a decision is to see a peer organization succeed with a similar application that is being considered for placement in the public cloud.”

This concludes our round of gathered views on the subject, but we’d like your input. Whether you’re a cloud vendor or an end user weighing the benefits against the risks of public cloud resources, there are very likely at least a few ideas presented here that you agree or disagree with. Are public clouds ready for the responsibility, and do you feel the time is right to place trust in the clouds?
