Mapping the SLA Landscape for High Performance Clouds

By Dr. Ivona Brandic

February 7, 2011

Cloud computing represents the convergence of several concepts in IT, from virtualization and distributed application design to grid computing and enterprise IT management, making it a promising paradigm for the on-demand provision of ICT infrastructures.

Over the past few years, significant effort has gone into the sub-fields of cloud research, including the development of federation mechanisms, cloud security, virtualization, and service management techniques.

While a wealth of work has advanced the technological development of clouds, very little has been done so far on the market mechanisms that support them.

As we have learned in the past, however (consider the case of grid technologies), appropriate market models for virtual goods, ease of use of those markets, low entry thresholds for traders and buyers, and suitable processes for defining and managing virtual goods remain challenging issues. How these topics are addressed will decide whether cloud computing takes root as a self-sustaining, state-of-the-art technology.
 
The current cloud landscape is characterized by two market mechanisms: users either select products from one of the big players, with their well-defined but rigid offerings, or they rely on offline relationships with cloud providers offering niche products.

This division is especially marked in the area of HPC, given its comprehensive special requirements: specific security infrastructures, compliance with legal guidelines, massive scalability, and support for parallel code execution, among others. HPC thus suffers from a low number of comparable choices, resulting in low liquidity in current cloud markets and provider/vendor lock-in.

Sufficient market liquidity is essential for dynamic and open cloud markets. Liquid markets are characterized by a high number of matches between bids and offers. With low market liquidity, traders run a high risk of not being able to sell their resources, while users risk not finding suitable products.

A crucial factor in achieving high market liquidity is the existence of standardized goods. Virtual goods, as traded in clouds, however, exhibit high variability in product description: very similar or almost identical goods can be described in various ways, with different attributes and parameters.

As shown in Table 1 below, computing resources traded in a PaaS fashion can be described through different non-standardized attributes, e.g., CPU cores, incoming bandwidth, processor type, or required storage. High variability in the description of goods thus again results in low market liquidity. Another important characteristic of virtual goods is that they change and evolve over time, following technological trends; the attribute "number of cores", for example, appeared only with the introduction of multi-core architectures.

Table 1: Example SLA parameters

SLA Parameter         Value
Incoming Bandwidth    > 10 Mbit/s
Outgoing Bandwidth    > 12 Mbit/s
Storage               > 1024 GB
Availability          > 99%
CPU Cores             > 16

Based on the aforementioned observations, two challenging questions have been identified:

  • How can users’ demand and traders’ offers be channeled towards standardized products, which can evolve and adapt over time and reflect users’ needs and traders’ capabilities?
  • Which mechanisms do we need to achieve sufficient market liquidity, so that traders have a high probability of selling their products and users have a sufficient probability of buying the products they require?

To address this problem, we make use of Service Level Agreements (SLAs), which are traditionally used to establish contracts between cloud traders and buyers.

Table 1 shows a typical SLA with parameters and corresponding values expressing non-functional requirements for service usage. SLA templates are widely used SLA formats that contain all attributes and parameters but no values; they are usually used to channel the demand and supply of a market. Private templates are used within buyers' and traders' own infrastructures and reflect the needs of the particular stakeholder in terms of the SLA parameters they use to establish a contract. Typical SLA parameters at the PaaS level are depicted in Table 1 and include availability, incoming bandwidth, outgoing bandwidth, and so on. Given the high variability of virtual goods in cloud markets, the probability is high that the public templates used in marketplaces to attract buyers and sellers will not match the private templates of cloud stakeholders.
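
To make this concrete, the following fragments sketch what a public template and a slightly different private template might look like. The XML notation, element names, and identifiers are purely illustrative assumptions, not the actual template format used in our middleware.

    <!-- Hypothetical public SLA template fragment (illustrative notation only). -->
    <SLATemplate id="public-hpc-compute">
      <Parameter name="IncomingBandwidth" unit="Mbit/s"  predicate="geq"/>
      <Parameter name="OutgoingBandwidth" unit="Mbit/s"  predicate="geq"/>
      <Parameter name="Storage"           unit="GB"      predicate="geq"/>
      <Parameter name="Availability"      unit="percent" predicate="geq"/>
      <Parameter name="CPUCores"          unit="count"   predicate="geq"/>
    </SLATemplate>

    <!-- A buyer's private template describing almost the same good, but with a
         differently named core-count parameter and a price expressed in EUR. -->
    <SLATemplate id="private-buyer">
      <Parameter name="IncomingBandwidth" unit="Mbit/s"  predicate="geq"/>
      <Parameter name="Storage"           unit="GB"      predicate="geq"/>
      <Parameter name="Availability"      unit="percent" predicate="geq"/>
      <Parameter name="NumberOfCores"     unit="count"   predicate="geq"/>
      <Parameter name="Price"             unit="EUR"     predicate="leq"/>
    </SLATemplate>

Although both templates describe nearly the same good, a naive match between them fails because of the differing parameter names and units.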

One might think that traditional approaches such as semantic technologies, e.g., ontologies, could be used to channel the variety of SLA templates, or that public templates, downloaded and used directly within local business or scientific applications, could counteract the problem. However, ontologies are a highly static approach that cannot capture the dynamics of changing market demand and supply and of evolving products. Moreover, using public SLA templates directly in private business processes or scientific applications is in many cases not possible, since it requires changes to the local applications.

In the context of the Austrian national FoSII project (DSG group, Vienna University of Technology), we are investigating self-governing cloud computing infrastructures necessary for the attainment of established Service Level Agreements (SLAs). Timely prevention of SLA violations requires advanced resource monitoring and knowledge management. In particular, we are developing novel techniques for mapping low-level resource metrics to high-level SLAs, bridging the gap between the metrics monitored by arbitrary monitoring tools and the SLA metrics guaranteed to the user, which are usually application-based (for example, deriving the availability promised in an SLA from measured uptime and downtime).

We apply various knowledge management techniques, for example Case-Based Reasoning, to prevent SLA violations before they occur while also reducing energy consumption. In collaboration with Seoul National University, we are exploring novel models for SLA mapping to counteract the problem of heterogeneous public and private templates in cloud markets. The SLA mapping approach enables market participants to define translations from their private templates to public SLA templates while keeping their private templates unchanged. The effects of the SLA mapping approach are twofold:

  • It increases market liquidity, since slightly different private templates are channeled towards a few public templates. These public templates can in turn be adapted frequently, based on the supplied, aggregated, and analyzed SLA mappings, so that they reflect the demand and supply of the market.
  • By clustering the supplied SLA mappings, different groups of cloud buyers with similar demand can be identified. Based on this clustering information, products can be tailored to specific groups of users, including niche products, which are usually neglected in traditional markets.

SLA mapping bridges the gap between inconsistent parts of two SLA templates, usually between a publicly available template and a private template. For the implementation of SLA mappings we use XSLT, a declarative XML-based language for transforming XML documents. The original document is not changed; rather, a new document is created based on the content of the original. Thus, if the original document is the cloud user's private template, which differs from the public template, XSLT transformations can be defined that transform the private template into the public one.
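
As a minimal sketch, assuming the hypothetical template notation shown earlier (not the actual VieSLAF mapping documents), such a mapping can be written as an XSLT stylesheet that copies the private template unchanged and rewrites only the inconsistent part, here the name of one SLA parameter:

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- Identity rule: copy everything that needs no mapping. -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- Rewrite the private parameter name into the public one. -->
      <xsl:template match="Parameter[@name='NumberOfCores']/@name">
        <xsl:attribute name="name">CPUCores</xsl:attribute>
      </xsl:template>

    </xsl:stylesheet>

Applied to the private template, this produces a new document that matches the public template, while the private template itself remains untouched.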

We distinguish two different types of mappings:

1. Ad-hoc SLA mapping. Such mappings define translations for a parameter that exists in both the private and the public SLA template. We distinguish simple ad-hoc mappings, i.e., mappings between different values of an SLA attribute or element, e.g., between the names CPU Cores and Number Of Cores of an SLA parameter, and complex ad-hoc mappings, i.e., mappings between different functions for calculating the value of an SLA parameter. An example of a complex mapping is converting the unit of the SLA parameter Price from EUR to USD, where a translation has to be defined from one price-calculation function to another (see the sketch after this list). Although simple and complex mappings appear rather trivial, without them contracts cannot be established between non-matching templates without human intervention or the overhead of a semantic layer, which in any case has to be managed manually.

2. Future SLA mapping. Such mappings express a wish to add a new SLA parameter supported by the application to a public SLA template, or to delete an existing SLA parameter from a public template. Unlike ad-hoc mappings, future mappings cannot be applied immediately, but possibly in the future. For example, a buyer could express the need for a specific SLA parameter that does not yet exist but that can be integrated into the public templates once the supplied SLA mappings have been observed.
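
The complex ad-hoc mapping mentioned in item 1 can be sketched in the same style. The following stylesheet converts the hypothetical Price parameter from EUR to USD; the element names and the example exchange rate are assumptions for illustration, not part of the actual system. The unit is rewritten in any case, and whenever a concrete value attribute is present (as in an instantiated SLA rather than a bare template) it is recalculated as well:

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- Example exchange rate, hard-coded for this sketch. -->
      <xsl:variable name="usdPerEur" select="1.35"/>

      <!-- Identity rule: copy everything that needs no mapping. -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

      <!-- Recalculate the price value and switch its unit. -->
      <xsl:template match="Parameter[@name='Price']/@value">
        <xsl:attribute name="value">
          <xsl:value-of select="format-number(. * $usdPerEur, '0.00')"/>
        </xsl:attribute>
      </xsl:template>
      <xsl:template match="Parameter[@name='Price']/@unit">
        <xsl:attribute name="unit">USD</xsl:attribute>
      </xsl:template>

    </xsl:stylesheet>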

So far we have implemented a first prototype of the VieSLAF (Vienna Service Level Agreement Framework) middleware for the management of SLA mappings, allowing users and traders to define, manage, and apply their mappings. In recent work we developed simulation models defining market settings suitable for evaluating the SLA mapping approach in a real-world scenario. Based on the applied SLA mappings, we defined utility and cost models for users and providers. We then applied three different methods for evaluating the supplied SLA mappings over a specific time span, simulating market conditions in which a number of market participants enter and leave the market with different distributions of SLA parameters, thus requiring different SLA mapping scenarios.

Our first observations show promising results: we achieve high net utilities when weighing the utilities and costs of performing SLA mappings against doing nothing (i.e., not achieving a match in the market). Moreover, in our simulations we applied clustering algorithms and isolated clusters of SLA templates that can serve as a starting point for defining various cloud products. The utility achieved when applying clustering algorithms outperforms that of applying SLA mappings alone and of doing nothing.

However, these are only preliminary results, and the full potential of SLA mappings has not yet been exploited. Integration into IDEs such as Eclipse, where cloud stakeholders could define SLA mappings using suitable domain-specific languages, e.g., visual modeling languages, is an open research issue and could make it easier for domain specialists to define SLA mappings.

The process of defining SLA mappings is still in its early stages; for now, mappings are defined manually by end users. With the development of appropriate infrastructures and middleware, however, mapping could be done automatically. For example, if the attribute Price has to be translated to EUR, a third-party service delivering the current USD/EUR exchange rate could be included autonomically, facilitating not only the mapping between different attributes but also the proper generation of the corresponding attribute values.
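
Continuing the hypothetical price-mapping sketch from above, one way such automation could look is to supply the externally obtained exchange rate as a stylesheet parameter instead of hard-coding it; the parameter name, rate, and file names below are placeholders:

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

      <!-- Default rate; overridden at transformation time by middleware that
           has queried a third-party currency service for the current rate. -->
      <xsl:param name="eurPerUsd" select="0.74"/>

      <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
      </xsl:template>

      <xsl:template match="Parameter[@name='Price']/@value">
        <xsl:attribute name="value">
          <xsl:value-of select="format-number(. * $eurPerUsd, '0.00')"/>
        </xsl:attribute>
      </xsl:template>
      <xsl:template match="Parameter[@name='Price']/@unit">
        <xsl:attribute name="unit">EUR</xsl:attribute>
      </xsl:template>

    </xsl:stylesheet>

A standard XSLT processor can then inject the current rate, for instance: xsltproc --param eurPerUsd 0.74 price-mapping.xsl private-sla.xml.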

Aggregated and analyzed SLA mappings can deliver important information about the demand and structure of the market, facilitating the development of open and dynamic cloud markets. Market rules and structures can then be adapted on demand, based on current developments among products and market participants.

About the Author

Dr. Ivona Brandic is Assistant Professor at the Distributed Systems Group, Information Systems Institute, Vienna University of Technology (TU Wien).

Prior to that, she was Assistant Professor at the Department of Scientific Computing, Vienna University. She received her PhD degree from Vienna University of Technology in 2007. From 2003 to 2007 she participated in the special research project AURORA (Advanced Models, Applications and Software Systems for High Performance Computing) and the European Union’s GEMSS (Grid-Enabled Medical Simulation Services) project.

She is involved in the European Union's SCube project and she is leading the Austrian national FoSII (Foundations of Self-governing ICT Infrastructures) project funded by the Vienna Science and Technology Fund (WWTF). She is a Management Committee member of the European Commission's COST Action on Energy Efficient Large Scale Distributed Systems. From June to August 2008 she was a visiting researcher at the University of Melbourne. Her interests comprise SLA and QoS management, service-oriented architectures, autonomic computing, workflow management, and large-scale distributed systems (cloud, grid, cluster, etc.).
 
