Testing the Cloud: Assuring Availability

By Joe Barry

August 16, 2011

Cloud computing is changing how IT services are delivered and consumed today. The ability for enterprises large and small to centralize and outsource increasingly complex IT infrastructure, while consuming cloud-based IT services on demand, promises to transform the economics of doing business.

But note that I say “promises”: even though there are many success stories among early adopters, the real test will come when cloud computing becomes the de facto model for IT service delivery and consumption. By all accounts, mainstream adoption of cloud services is close at hand.

In essence, cloud computing is entering a new phase in its development, where assuring the availability and quality of cloud services will become a major challenge. Preparing for this now will ensure that cloud computing continues to deliver on its “promise”.

From excess to scarce resources

Cloud computing was initially driven by excess computing capacity. Large web companies, such as Amazon and Google, that had to build large data center capacity for their own business, saw an opportunity to provide their excess capacity as a service to others. This has been so successful that these cloud services, such as Amazon Web Services, have become a business in themselves.

Yet, as these services become more popular, demand will tend to outstrip supply, especially as some of the enablers of cloud service adoption, such as higher-speed access connections, continue to grow in capacity. Simply adding more servers and faster networks is effective, but it is costly and can undermine one of the main reasons for using cloud services in the first place: cost reduction. Cloud service providers will thus face the dilemma of managing demand for scarcer computing resources while maintaining a low, or at least competitive, cost level.

In other words, how can cloud service providers meet mainstream demand cost-effectively?

Efficiently Assuring Service Availability

Cloud services come in many shapes and sizes, from private clouds to public clouds offering software-, platform- and infrastructure-as-a-service. Nevertheless, all these flavors of cloud service share a common need: to assure service availability, and to do so as efficiently and cost-effectively as possible.

Many cloud services already provide availability monitoring tools, but these are often limited to monitoring server or service uptime. Uptime, however, is only one aspect of service availability, as cloud services depend on much more than the physical or virtual server on which they reside. Increasingly, the data communication infrastructure carrying the cloud service from provider to consumer also needs to be assured, even though it might be outside the direct control of the service provider.

To ensure mainstream adoption of cloud services, consumers must be confident that the services they require, and the data those services host, are available quickly when and where they need them. Otherwise, why not continue with current approaches? Mainstream consumers are noted for being more conservative and pragmatic in their choice of solutions, so addressing this concern must be a top priority for continued expansion of cloud service adoption.

Therefore, building the infrastructure to test and monitor cloud services is essential.

Testing and monitoring cloud services

From a testing and monitoring perspective, there are a number of layers one can address:

•    The Wide Area Network (WAN) providing data communication services between the enterprise customer and the cloud service – fundamental to service assurance and testing of end-to-end service availability

•    The data center infrastructure comprising servers and the data communication between them (LAN), where the availability and uptime of this equipment are key, as is efficient use of resources to ensure service efficiency

•    The monitoring infrastructure in the data center that is the basis for service assurance, which itself needs to be efficient

•    The individual servers, and the monitoring appliances built on servers, which must also follow efficiency and availability principles to assure overall service efficiency and service availability
 
Testing end-to-end

The first test that can be performed is of end-to-end availability. At a basic level, this involves testing connectivity, but it can also involve testing specifically relevant to cloud services, such as latency measurement. Several commercial systems exist for testing latency in a WAN environment. These are most often used by financial institutions to determine the time it takes to execute financial transactions with remote stock exchanges, but they can also be used by cloud service providers to test the latency of the connection to enterprise customers.
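As a simple illustration, connection-setup time can serve as a first-order latency measurement. Below is a minimal sketch in Python; the endpoint name is a placeholder, and a production deployment would use dedicated appliances with hardware timestamping rather than host-level timers.

```python
import socket
import time

def connect_latency_ms(host: str, port: int, timeout: float = 5.0) -> float:
    """Measure TCP connection-setup time to a remote endpoint, in ms."""
    start = time.perf_counter()
    # create_connection resolves the name and completes the TCP handshake
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.perf_counter() - start
    return elapsed * 1000.0

if __name__ == "__main__":
    # "service.example.com" is a placeholder for the cloud service endpoint
    for _ in range(5):
        print(f"connect latency: {connect_latency_ms('service.example.com', 443):.2f} ms")
        time.sleep(1)
```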

Such a latency-monitoring solution requires installing a network appliance at the enterprise site, which can also be used to test connectivity, as well as for troubleshooting and SLA monitoring.

Typically, the cloud service provider does not own the WAN data communication infrastructure. However, using network monitoring and analysis appliances at both the data center and the enterprise, it is possible to measure how well the WAN is providing the required data communication service. The choice of WAN data communication provider should also be driven by that provider’s ability to deliver performance data in support of agreed SLAs. In other words, the provider should have the monitoring and analysis infrastructure in place to assure its own services.
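To make the idea concrete, here is a hedged sketch of how latency samples gathered by such appliances might be checked against an SLA target. The 95th-percentile threshold is an illustrative figure, not a standard one.

```python
import statistics

def sla_report(samples_ms, sla_p95_ms=50.0):
    """Summarize latency samples and test them against a p95 SLA target."""
    p95 = statistics.quantiles(samples_ms, n=100)[94]  # 95th percentile
    return {
        "mean_ms": round(statistics.mean(samples_ms), 2),
        "p95_ms": round(p95, 2),
        "sla_met": p95 <= sla_p95_ms,
    }

# Samples as they might arrive from appliances at both ends of the WAN
print(sla_report([12.1, 14.8, 13.0, 55.2, 16.4, 15.0, 14.2, 13.7, 12.9, 14.1]))
```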

From reaction to service assurance

Network monitoring and analysis of the data center infrastructure is also crucial, as cloud service providers need to rely less on troubleshooting and more on service assurance strategies. In typical enterprise IT deployments, a reactive strategy is preferred, whereby issues are troubleshot as they arise. For enterprise LAN environments this can be acceptable, as some downtime can be tolerated. For cloud service providers, however, downtime is a disaster! If customers are not confident in the cloud service provider’s ability to assure service availability, they will be quick to find alternatives or even revert to a local installation.

A service assurance strategy involves constant monitoring of network and service performance so that emerging issues can be identified before they affect users. Network and application performance monitoring tools are available from a number of vendors for precisely this purpose.
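One common pattern is to watch a rolling average and raise an early warning well below the hard alarm level, so operators can act before users notice. A minimal sketch follows; the window size and thresholds are assumptions for illustration.

```python
from collections import deque

WINDOW_SIZE = 20   # samples in the rolling window (assumed)
WARN_MS = 40.0     # early-warning level (assumed)
ALARM_MS = 60.0    # hard alarm level (assumed)

window = deque(maxlen=WINDOW_SIZE)

def observe(sample_ms: float) -> str:
    """Classify the latest latency sample by the rolling-average trend."""
    window.append(sample_ms)
    avg = sum(window) / len(window)
    if avg >= ALARM_MS:
        return "ALARM"  # the service is already degraded
    if avg >= WARN_MS:
        return "WARN"   # trending toward trouble: investigate now
    return "OK"

for sample in (30, 32, 35, 41, 44, 47, 52, 58, 63, 70):
    print(f"{sample} ms -> {observe(sample)}")
```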

The power of virtualization

One technology innovation of particular use to cloud service providers is virtualization. Consolidating multiple cloud services onto as few physical servers as possible provides tremendous efficiency benefits by lowering cost, space, and power consumption. In addition, the ability to move the virtual machines supporting cloud services from one physical server to another allows resources to be matched to time-of-day demand and enables fast reaction to detected performance issues.

One consequence of this consolidation is the need for higher-speed interfaces, as more data must be delivered to each server. This, in turn, requires that the data communication infrastructure is dimensioned to deliver this data, which in turn demands that the network monitoring infrastructure can keep up with the data rates without losing packets. That is far from a given, so cloud service providers need to pay particular attention to the throughput performance of network monitoring and analysis appliances to ensure they can keep up, now and in the future.
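A quick back-of-envelope check makes the point: the rate the monitoring tap must sustain grows linearly with consolidation. The per-VM traffic figure below is an assumption for illustration, not a measured value.

```python
def required_monitor_gbps(vms_per_server: int, avg_gbps_per_vm: float) -> float:
    """Aggregate traffic the monitoring tap must capture without loss."""
    return vms_per_server * avg_gbps_per_vm

# 0.5 Gbps per VM is an assumed average
for vms in (4, 8, 16, 32):
    print(f"{vms:>2} VMs x 0.5 Gbps = {required_monitor_gbps(vms, 0.5):4.1f} Gbps to capture")
```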

Within the virtualized servers themselves, emerging solutions can also assist in monitoring performance. Just as network and application performance monitoring appliances are available for the physical infrastructure, virtualized versions of these applications are now available for monitoring virtual applications and the communication between virtual machines.

There are also virtual test applications that allow a number of virtual ports to be defined for load testing in a cloud environment. This is extremely useful for testing whether a large number of users can access a service without having to deploy a large test network, making it an ideal tool for cloud service providers.
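In the same spirit, the sketch below simulates many concurrent users against a single endpoint. The URL and user count are placeholders; real virtual test applications distribute load across many virtual ports rather than one host’s thread pool.

```python
import concurrent.futures
import time
import urllib.request

def one_user(url: str) -> float:
    """Simulate a single user: fetch the URL and return elapsed seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def load_test(url: str, users: int = 100) -> None:
    """Run `users` concurrent requests and report response-time stats."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(one_user, [url] * users))
    print(f"{users} users: avg {sum(times) / len(times) * 1000:.1f} ms, "
          f"max {max(times) * 1000:.1f} ms")

if __name__ == "__main__":
    # Placeholder endpoint; point this at a test instance, never production
    load_test("https://service.example.com/health", users=100)
```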

Bringing virtualization to network monitoring and analysis

While virtualization has been used to improve service efficiency, the network monitoring and analysis infrastructure is still dominated by single-server implementations. In many cases this is because the network monitoring and analysis appliance requires all the processing power it can get. However, there are opportunities to consolidate appliances, especially as servers and server CPUs increase in performance year after year.

Solutions are now available that allow multiple network monitoring and analysis applications to be hosted on the same physical server. If all the applications are based on the same operating system, intelligent network adapters can ensure that data is shared between these applications, which often need to analyze the same data at the same time, but for different purposes.
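The principle is capture once, analyze many times. Below is a hedged software sketch of that data-sharing idea using in-process queues; in practice an intelligent adapter does the fan-out in hardware, delivering the same captured stream to each application.

```python
import queue
import threading

def fan_out(source, consumer_queues):
    """Deliver every captured packet to every analysis application."""
    for pkt in source:
        for q in consumer_queues:
            q.put(pkt)
    for q in consumer_queues:
        q.put(None)  # sentinel: end of stream

def analyzer(name: str, q: queue.Queue) -> None:
    count = 0
    while (pkt := q.get()) is not None:
        count += 1  # stand-in for protocol decode, SLA checks, etc.
    print(f"{name}: analyzed {count} packets")

# Two analysis "applications" consuming the same capture stream
queues = [queue.Queue(), queue.Queue()]
threads = [threading.Thread(target=analyzer, args=(f"app{i}", q))
           for i, q in enumerate(queues)]
for t in threads:
    t.start()
fan_out(({"id": i} for i in range(1000)), queues)  # simulated packets
for t in threads:
    t.join()
```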

However, for situations where the applications are based on different operating systems, virtualization can be used to consolidate them onto a single physical server. Demonstrations have shown that up to 32 applications can thus be consolidated using virtualization.

By pursuing opportunities for consolidation of network monitoring and analysis appliances, cloud service providers can further improve service efficiency.

Preparing for mainstream adoption

Mainstream adoption of cloud services is just around the corner. To take full advantage of this demand, cloud service providers can use the existing tools and concepts described above to assure service availability in a cost-effective and efficient manner.
