Testing the Cloud: Assuring Availability

By Joe Barry

August 16, 2011

Cloud computing is changing how IT services are delivered and consumed. The ability of enterprises large and small to centralize and outsource increasingly complex IT infrastructure, while consuming cloud-based IT services on demand, promises to transform the economics of doing business.

But note that I say “promises”: even though there are many success stories among early adopters, the real test will come when cloud computing becomes the de facto model for IT service delivery and consumption. By all accounts, mainstream adoption of cloud services is close at hand.

In essence, cloud computing is entering a new phase in its development, where assuring the availability and quality of cloud services will become a major challenge. Preparing for this now will ensure that cloud computing continues to deliver on its “promise”.

From excess to scarce resources

Cloud computing was initially driven by excess computing capacity. Large web companies such as Amazon and Google, which had built large data centers for their own businesses, saw an opportunity to offer their excess capacity as a service to others. This has been so successful that these cloud services, such as Amazon Web Services, have become businesses in their own right.

Yet, as these services become more popular, demand will tend to outstrip supply, especially as enablers of cloud service adoption, such as higher-speed access connections, continue to grow in capacity. Simply adding more servers and higher-speed networks is effective, but it is costly and can undermine one of the main reasons for using cloud services in the first place: cost reduction. Cloud service providers therefore face a dilemma: managing demand for increasingly scarce computing resources while maintaining a low, or at least competitive, cost level.

In other words, how can cloud service providers meet mainstream demand cost-effectively?

Efficiently assuring service availability

Cloud services come in many shapes and sizes, from private clouds to public clouds offering software-, platform- and infrastructure-as-a-service. Nevertheless, all of these flavors share a common need: to assure service availability, and to do so as efficiently and cost-effectively as possible.

Many cloud services already provide service availability monitoring tools, but these are often limited to monitoring server or service uptime. Uptime, however, is only one aspect of service availability, because cloud services depend on much more than the physical or virtual server on which they reside. Increasingly, the data communication infrastructure carrying the cloud service from the provider to the consumer also needs to be assured, even though it may lie outside the service provider's direct control.

To ensure mainstream adoption of cloud services, consumers must be confident that the services they rely on, and the data those services host, are available when and where they need them. Otherwise, why not continue with current approaches? Mainstream consumers are noted for being more conservative and pragmatic in their choice of solutions, so addressing this concern must be a top priority for continued expansion of cloud service adoption.

Therefore, building the infrastructure to test and monitor cloud services is essential.

Testing and monitoring cloud services

From a testing and monitoring perspective, there are a number of layers one can address (a brief illustrative sketch of this layering follows the list):

•    The Wide Area Network (WAN) providing data communication services between the enterprise customer and the cloud service – fundamental to service assurance and testing of end-to-end service availability

•    The data center infrastructure, comprising servers and the data communication between them (LAN), where service availability and equipment uptime are key, along with efficient use of resources to ensure service efficiency

•    The monitoring infrastructure in the data center, which is the basis for service assurance and which itself needs to be efficient

•    The individual servers and the server-based monitoring appliances, which must also follow efficiency and availability principles to assure overall service efficiency and availability
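
To make the layering concrete, here is a minimal sketch that expresses it as a monitoring checklist and reports coverage gaps. The layer names and example metrics are assumptions chosen for illustration, not drawn from any particular vendor's tooling.

```python
# Illustrative sketch: the four monitoring layers expressed as a checklist.
# Layer names and example metrics are assumptions for illustration only.
MONITORING_LAYERS = {
    "wan": ["end-to-end connectivity", "round-trip latency", "packet loss"],
    "data_center_lan": ["link utilization", "switch/server uptime"],
    "monitoring_infrastructure": ["capture throughput", "dropped-packet count"],
    "servers_and_appliances": ["CPU/memory headroom", "appliance availability"],
}

def coverage_report(active_checks: set[str]) -> dict[str, list[str]]:
    """Return, per layer, the metrics that are not yet being monitored."""
    return {
        layer: [m for m in metrics if m not in active_checks]
        for layer, metrics in MONITORING_LAYERS.items()
    }

if __name__ == "__main__":
    gaps = coverage_report({"round-trip latency", "switch/server uptime"})
    for layer, missing in gaps.items():
        print(f"{layer}: missing {missing}")
```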
 
Testing end-to-end

The first test that can be performed is of end-to-end availability. At a basic level this involves testing connectivity, but it can also involve testing specifically relevant to cloud services, such as latency measurement. Several commercial systems exist for testing latency in a WAN environment. These are most often used by financial institutions to determine the time it takes to execute transactions with remote stock exchanges, but they can also be used by cloud service providers to test the latency of the connection to enterprise customers.
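
As a minimal illustration of this kind of end-to-end check, the sketch below times TCP connections to a service endpoint and treats failed attempts as lost samples. The host, port and sample count are hypothetical; a commercial latency-measurement system would be far more precise (hardware timestamping, one-way delay measurement and so on).

```python
import socket
import statistics
import time

def probe_latency(host: str, port: int, samples: int = 10, timeout: float = 2.0):
    """Measure TCP connect times (ms) to host:port; failures count as lost samples."""
    times, lost = [], 0
    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                times.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            lost += 1
    return times, lost

if __name__ == "__main__":
    # Hypothetical cloud service endpoint.
    rtts, lost = probe_latency("service.example.com", 443)
    if rtts:
        print(f"median connect time: {statistics.median(rtts):.1f} ms, lost: {lost}")
    else:
        print("no connectivity")
```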

This approach requires installing a network appliance at the enterprise site to monitor latency. The same appliance can also be used to test connectivity, troubleshoot issues and monitor SLAs.

Typically, the cloud service provider does not own the WAN data communication infrastructure. However, using network monitoring and analysis appliances at both the data center and the enterprise, it is possible to measure how well the WAN is delivering the required data communication service. The choice of WAN provider should also be driven by that provider's ability to supply performance data in support of agreed SLAs; in other words, the provider should have the monitoring and analysis infrastructure in place to assure its own services.
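
A hedged sketch of how measurements collected by such appliances might be checked against an agreed SLA is shown below. The 95th-percentile latency target and loss threshold are assumptions for illustration only.

```python
import statistics

def sla_compliant(latencies_ms: list[float], lost: int = 0,
                  sla_p95_ms: float = 50.0, sla_loss_pct: float = 0.1) -> bool:
    """Check a batch of WAN latency samples against assumed SLA targets."""
    total = len(latencies_ms) + lost
    if total == 0 or len(latencies_ms) < 2:
        return False  # no usable measurements: treat as non-compliant
    loss_pct = 100.0 * lost / total
    p95 = statistics.quantiles(latencies_ms, n=20)[-1]  # 95th percentile
    return p95 <= sla_p95_ms and loss_pct <= sla_loss_pct
```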

From reaction to service assurance

Network monitoring and analysis of the data center infrastructure is also crucial, as cloud service providers need to rely less on troubleshooting and more on service assurance strategies. In typical enterprise IT deployments, a reactive strategy is preferred: issues are dealt with through troubleshooting as they arise. For enterprise LAN environments this can be acceptable, as some downtime can be tolerated. For cloud service providers, however, downtime is a disaster. If customers are not confident in a provider's ability to assure service availability, they will be quick to find alternatives or even revert to a local installation.

A service assurance strategy involves constant monitoring of network and service performance so that emerging issues can be identified and addressed before they affect users. Network and application performance monitoring tools are available from a number of vendors for precisely this purpose.
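
As a simple illustration of the proactive principle, the sketch below keeps a rolling window of latency samples and raises a warning when the average drifts toward, rather than crosses, a service-affecting threshold. The window size, threshold and warning ratio are assumptions.

```python
from collections import deque

class TrendWatcher:
    """Warn when a rolling average drifts toward a service-affecting threshold."""

    def __init__(self, hard_limit_ms: float = 100.0, warn_ratio: float = 0.8,
                 window: int = 60):
        self.hard_limit_ms = hard_limit_ms
        self.warn_ratio = warn_ratio
        self.samples = deque(maxlen=window)

    def add(self, latency_ms: float) -> str:
        """Record one sample and return the current status."""
        self.samples.append(latency_ms)
        avg = sum(self.samples) / len(self.samples)
        if avg >= self.hard_limit_ms:
            return "critical"   # threshold already breached
        if avg >= self.warn_ratio * self.hard_limit_ms:
            return "warning"    # act before users notice
        return "ok"
```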

The power of virtualization

One technology innovation of particular use to cloud service providers is virtualization. The ability to consolidate multiple cloud services onto as few physical servers as possible delivers tremendous efficiency benefits by lowering cost, space and power consumption. In addition, the ability to move virtual machines supporting cloud services from one physical server to another allows resources to be matched to time-of-day demand and enables fast reaction to detected performance issues.

One consequence of this consolidation is the need for higher-speed interfaces, as more data must be delivered to each server. This, in turn, requires that the data communication infrastructure be dimensioned to provide this data, which in turn demands that the network monitoring infrastructure keep up with the data rates without losing data. This is far from a given, so cloud service providers need to pay particular attention to the throughput performance of network monitoring and analysis appliances to ensure they can keep up, now and in the future.
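
A back-of-the-envelope sketch of that dimensioning question follows: given the aggregate rate of the monitored links and the sustained capture capacity of a monitoring appliance, is there headroom? The link rates, utilization and appliance capacity used here are purely illustrative.

```python
def monitoring_headroom(link_rates_gbps, utilization, appliance_capacity_gbps):
    """Return spare capture capacity (Gbps); a negative result means packets will be dropped."""
    offered_load = sum(link_rates_gbps) * utilization
    return appliance_capacity_gbps - offered_load

if __name__ == "__main__":
    # Hypothetical example: four 10 Gbps links at 60% average utilization,
    # monitored by an appliance with 20 Gbps of sustained capture capacity.
    spare = monitoring_headroom([10, 10, 10, 10], 0.6, 20)
    print(f"headroom: {spare:.1f} Gbps")  # negative => monitoring will lose data
```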

Within the virtualized servers themselves, solutions are also emerging to assist in monitoring performance. Just as network and application performance monitoring appliances are available to monitor the physical infrastructure, virtualized versions of these applications are now available for monitoring virtual applications and the communication between virtual machines.

There are also virtual test applications that allow a number of virtual ports to be defined for load testing in a cloud environment. This is extremely useful for testing whether a large number of users can access a service without having to deploy a large physical test network, making it an ideal tool for cloud service providers.
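
The sketch below illustrates the idea in miniature: a pool of "virtual users" hitting a service concurrently and reporting success rate and latency. The URL and concurrency level are hypothetical, and purpose-built virtual test tools operate at far larger scale than this.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def one_request(url: str, timeout: float = 5.0):
    """Fetch url once, returning (succeeded, elapsed_ms)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except OSError:
        ok = False
    return ok, (time.perf_counter() - start) * 1000.0

def load_test(url: str, virtual_users: int = 50, requests_per_user: int = 4):
    """Run a small concurrent load test and print summary statistics."""
    total = virtual_users * requests_per_user
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        results = list(pool.map(lambda _: one_request(url), range(total)))
    ok_times = [ms for success, ms in results if success]
    print(f"success rate: {len(ok_times) / total:.0%}")
    if ok_times:
        print(f"median latency: {statistics.median(ok_times):.0f} ms")

if __name__ == "__main__":
    load_test("https://service.example.com/health")  # hypothetical endpoint
```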

Bringing virtualization to network monitoring and analysis

While virtualization has been used to improve service efficiency, the network monitoring and analysis infrastructure is still dominated by single-server implementations. In many cases this is because the network monitoring and analysis appliance requires all the processing power it can get. However, there are opportunities to consolidate appliances, especially as servers and server CPUs improve in performance year after year.

Solutions are now available that allow multiple network monitoring and analysis applications to be hosted on the same physical server. If all the applications run on the same operating system, intelligent network adapters can share the captured data among them, since these applications often need to analyze the same data at the same time, but for different purposes.
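
As a conceptual sketch only (real intelligent adapters do this in hardware or driver software, not in application code), the snippet below fans a single stream of captured packets out to several analysis callbacks so that each sees the same data once. The analyzers shown are hypothetical stand-ins.

```python
from typing import Callable, Iterable

Packet = bytes
Analyzer = Callable[[Packet], None]

def fan_out(packets: Iterable[Packet], analyzers: list[Analyzer]) -> None:
    """Deliver each captured packet to every analysis application exactly once."""
    for pkt in packets:
        for analyze in analyzers:
            analyze(pkt)

if __name__ == "__main__":
    # Hypothetical analyzers sharing the same captured traffic.
    sizes = []
    fan_out(
        packets=[b"\x00" * 64, b"\x00" * 1500],
        analyzers=[
            lambda p: sizes.append(len(p)),                       # performance monitor
            lambda p: print(f"security check: {len(p)} bytes"),   # IDS-style check
        ],
    )
    print(f"average packet size: {sum(sizes) / len(sizes):.0f} bytes")
```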

However, for situations where the applications are based on different operating systems, virtualization can be used to consolidate them onto a single physical server. Demonstrations have shown that up to 32 applications can thus be consolidated using virtualization.

By pursuing opportunities for consolidation of network monitoring and analysis appliances, cloud service providers can further improve service efficiency.

Preparing for mainstream adoption

Mainstream adoption of cloud services is just around the corner. To take full advantage of this demand, cloud service providers can use the existing tools and concepts described above to assure service availability in a cost-effective and efficient manner.
