Mapping the SLA Landscape for High Performance Clouds

By Dr. Ivona Brandic

February 7, 2011

Cloud computing represents the convergence of several concepts in IT, ranging from virtualization and distributed application design to grid computing and enterprise IT management, making it a promising paradigm for the on-demand provisioning of ICT infrastructures.

During the past few years, significant effort has been made in the sub-fields of cloud research, including the development of various federation mechanisms, cloud security, virtualization, and service management techniques.

While a wealth of work has advanced the technological development of clouds, very little has been done on the market mechanisms that support them.

As we learned in the past, however (consider the case of grid technologies), appropriate market models for virtual goods, ease of use of those markets, low entry thresholds for traders and buyers, and appropriate processes for the definition and management of virtual goods have remained challenging issues. How these topics are addressed will determine whether cloud computing takes root as a self-sustaining, state-of-the-art technology.
 
The current cloud landscape is characterized by two market mechanisms: users either select products from one of the big players, with their well-defined but rigid sets of offerings, or they rely on offline relationships with cloud providers offering niche products.

This division is especially marked in the area of HPC, given the comprehensive special requirements involved, including specific security infrastructures, compliance with legal guidelines, massive scalability, and support for parallel code execution, among others. HPC thus suffers from a low number of comparable choices, resulting in low liquidity in current cloud markets and provider/vendor lock-in.

Sufficient market liquidity is essential for dynamic and open cloud markets. Liquid markets are characterized by a high number of matches between bids and offers. With low market liquidity, traders run a high risk of being unable to trade their resources, while users risk being unable to find suitable products.

A crucial factor in achieving high market liquidity is the existence of standardized goods. Virtual goods, such as those traded in clouds, however, exhibit high variability in their product descriptions: very similar or almost identical goods can be described in various ways, with different attributes and parameters.

As shown in Table 1 below, computing resources traded in a PaaS fashion can be described through different non-standardized attributes, e.g., CPU cores, incoming bandwidth, processor types, or required storage. High variability in the description of goods thus again results in low market liquidity. Another important characteristic of virtual goods is that they change and evolve over time, following technological trends. For example, the attribute "number of cores" appeared only with the introduction of multi-core architectures.

Table 1: Example SLA parameters

Parameter            Value
Incoming Bandwidth   > 10 Mbit/s
Outgoing Bandwidth   > 12 Mbit/s
Storage              > 1024 GB
Availability         > 99%
CPU Cores            > 16

Based on the aforementioned observations, we have identified two challenging questions:

  • How can users’ demand and traders’ offers be channeled toward standardized products that can evolve and adapt over time, reflecting users’ needs and traders’ capabilities?
  • Which mechanisms do we need to achieve sufficient market liquidity, so that traders have a high probability of selling their products and users have a sufficient probability of buying the products they require?

To address these challenges we make use of Service Level Agreements (SLAs), which are traditionally used to establish contracts between cloud traders and buyers.

Table 1 shows a typical SLA, with parameters and corresponding values expressing non-functional requirements for service usage. SLA templates represent popular SLA formats containing all attributes and parameters, but without any values; they are usually used to channel the demand and supply of a market. Private templates are utilized within buyers’ and traders’ own infrastructures and reflect the needs of a particular stakeholder in terms of the SLA parameters used to establish a contract. Typical SLA parameters used at the PaaS level are depicted in Table 1 and include availability, incoming bandwidth, outgoing bandwidth, and so on. Considering the high variability of virtual goods in cloud markets, the probability is high that the public templates used in marketplaces to attract buyers and sellers and the private templates of cloud stakeholders do not match.
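
To make such a mismatch concrete, here is a toy illustration in Python. The parameter names are invented for illustration and this is not the actual template format used in our middleware; it merely shows how a public and a private template can describe nearly the same product yet fail to match automatically:

    # Toy public and private SLA templates (invented names).
    PUBLIC_TEMPLATE = {
        "Incoming Bandwidth": "> 10 Mbit/s",
        "Outgoing Bandwidth": "> 12 Mbit/s",
        "Storage": "> 1024 GB",
        "Availability": "> 99%",
        "CPU Cores": "> 16",
    }

    # The buyer means essentially the same product, but names differ.
    PRIVATE_TEMPLATE = {
        "Inbound Bandwidth": "> 10 Mbit/s",
        "Outgoing Bandwidth": "> 12 Mbit/s",
        "Disk Space": "> 1024 GB",
        "Availability": "> 99%",
        "Number Of Cores": "> 16",
    }

    def unmatched_parameters(public: dict, private: dict) -> set:
        """Parameters appearing in only one of the two templates."""
        return set(public) ^ set(private)  # symmetric difference of names

    # Without SLA mappings, these six mismatched names block an automatic match:
    print(unmatched_parameters(PUBLIC_TEMPLATE, PRIVATE_TEMPLATE))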

One might think that traditional approaches such as semantic technologies, e.g., ontologies, could be used to channel the variety of SLA templates. Public templates, which can be downloaded and utilized within local business or scientific applications, could likewise counteract the problem of template variety. However, ontologies are a highly static approach in which the dynamics of changing market demand and supply, and of evolving products, cannot be captured. Moreover, utilizing public SLA templates within private business processes or scientific applications is in many cases not possible, since it requires changes to the local applications.

In the context of the Austrian national FoSII project (DSG group, Vienna University of Technology), we are investigating the self-governing cloud computing infrastructures necessary for the attainment of established Service Level Agreements (SLAs). Timely prevention of SLA violations requires advanced resource monitoring and knowledge management. In particular, we are developing novel techniques for mapping low-level resource metrics to high-level SLAs, bridging the gap between the metrics monitored by arbitrary monitoring tools and the SLA metrics guaranteed to the user, which are usually application-based.
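
As a minimal sketch of this idea (not the FoSII implementation), low-level downtime measurements from a monitoring tool might be aggregated into the application-level metric "Availability" that the SLA actually guarantees, with a warning margin used to flag a looming violation before it occurs:

    def availability_percent(downtime_s: float, window_s: float) -> float:
        """Map the low-level metric (downtime) to the SLA-level metric."""
        return 100.0 * (1.0 - downtime_s / window_s)

    def violation_looming(downtime_s: float, window_s: float,
                          guaranteed: float = 99.0, margin: float = 0.5) -> bool:
        """True if availability has entered the warning band just above the
        guaranteed threshold, i.e., a violation should be prevented now."""
        return availability_percent(downtime_s, window_s) < guaranteed + margin

    month = 30 * 24 * 3600                   # 30-day observation window, in s
    print(availability_percent(300, month))  # 300 s down -> ~99.988 %
    print(violation_looming(13_000, month))  # ~99.50 % -> True, act now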

We apply various knowledge management techniques, for example Case-Based Reasoning, to prevent SLA violations before they occur while reducing energy consumption (a toy sketch of such a reasoning loop follows the list below). In collaboration with Seoul National University, we are exploring novel models for SLA mapping to counteract the problem of heterogeneous public and private templates in cloud markets. The SLA mapping approach enables market participants to define translations from their private templates to public SLA templates while keeping their private templates unchanged. The effects of the SLA mapping approach are twofold:

  • It increases market liquidity, since slightly different private templates are channeled toward a few publicly available public templates. Public templates can consequently be adapted frequently, based on the submitted, aggregated, and analyzed SLA mappings. Publicly available SLA templates thus reflect the demand and supply of the market and can be adapted easily.
  • By clustering the submitted SLA mappings, different groups of cloud buyers with similar demand can be identified. Based on the information obtained from the clustering, products can then be tailored to specific groups of users. This also includes the generation of product niches, which are usually neglected in traditional markets.
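
Returning to the Case-Based Reasoning mentioned above, the following toy loop shows the shape of such knowledge management; the case base, state features, and reactions are invented for illustration and are far simpler than the FoSII knowledge base:

    import math

    # Each case: (state = (cpu_load, free_storage_gb), reaction taken).
    # In practice the features would be normalized before comparison.
    CASE_BASE = [
        ((0.95, 50.0), "provision_extra_vm"),
        ((0.40, 900.0), "do_nothing"),
        ((0.85, 100.0), "migrate_vm"),
    ]

    def suggest_reaction(state):
        """1-nearest-neighbour retrieval over the case base."""
        nearest = min(CASE_BASE, key=lambda case: math.dist(case[0], state))
        return nearest[1]

    print(suggest_reaction((0.90, 80.0)))  # -> "migrate_vm" (nearest case)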

SLA mapping is used to bridge the gap between inconsistent parts of two SLA templates, usually between the publicly available template and a private template. For the implementation of SLA mappings we use XSLT, a declarative XML-based language for the transformation of XML documents. The original document is not changed; rather, a new document is created based on the content of the original. Thus, if the original document is the private template of a cloud user, and it differs from the public template, XSLT transformations can be defined that translate the private template into the public one.

We distinguish two different types of mappings:

1. Ad-hoc SLA mapping. Such mappings define translations between a parameter existing in both the private and the public SLA template. We distinguish simple ad-hoc mappings, i.e., mappings between different values of an SLA attribute or SLA element, e.g., a mapping between the names CPU Cores and Number Of Cores of an SLA parameter, and complex ad-hoc mappings, i.e., mappings between different functions for calculating the value of an SLA parameter. An example of a complex mapping would be translating the value of the SLA parameter Price from EUR to USD, where a translation has to be defined from one price-calculation function to another. Although simple and complex mappings appear rather trivial, contracts cannot be established between non-matching templates without human intervention or without the overhead of a semantic layer, which in any case has to be managed manually. (A minimal sketch of a simple ad-hoc mapping appears after this list.)

2. Future SLA mapping. Such mappings define a wish to add a new SLA parameter supported by the application to a public SLA template, or a wish to delete an existing SLA parameter from a public template. Unlike ad-hoc mappings, future mappings cannot be applied immediately, but possibly in the future. For example, a buyer could express the need for a specific SLA parameter that does not yet exist but that can be integrated into the public templates after observation of the submitted SLA mappings.
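
As promised above, here is a minimal sketch of a simple ad-hoc mapping expressed in XSLT and applied with Python's lxml library (an assumed dependency). The element names are invented for illustration and real templates are richer; the stylesheet copies the private template unchanged except for renaming CPUCores to NumberOfCores, producing a new document while leaving the original untouched:

    from lxml import etree

    PRIVATE_TEMPLATE = b"""<SLATemplate>
      <CPUCores>16</CPUCores>
      <Availability>99</Availability>
    </SLATemplate>"""

    XSLT_MAPPING = b"""<xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Identity rule: copy every node and attribute as-is by default. -->
      <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
      </xsl:template>
      <!-- Simple ad-hoc mapping: rename the SLA parameter. -->
      <xsl:template match="CPUCores">
        <NumberOfCores><xsl:apply-templates/></NumberOfCores>
      </xsl:template>
    </xsl:stylesheet>"""

    transform = etree.XSLT(etree.XML(XSLT_MAPPING))
    public_view = transform(etree.XML(PRIVATE_TEMPLATE))
    print(etree.tostring(public_view, pretty_print=True).decode())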

So far we have implemented the first prototype of the VieSLAF (Vienna Service Level Agreement Framework) middleware for the management of SLA mappings, allowing users and traders to define, manage, and apply their mappings. In our recent work we developed simulation models that define market settings suitable for evaluating the SLA mapping approach in a real-world scenario. Based on the applied SLA mappings, we defined utility and cost models for users and providers. Thereafter, we applied three different methods for evaluating the submitted SLA mappings over a specific time span. We simulated market conditions with a number of market participants entering and leaving the market with different distributions of SLA parameters, thus requiring different SLA mapping scenarios.

Our first observations show promising results: we achieve high net utilities when the utilities and costs of performing SLA mappings are compared with doing nothing (i.e., not achieving a match in the market). Moreover, in our simulations we applied clustering algorithms to isolate clusters of SLA templates, which can serve as a starting point for the definition of various cloud products. The utilities achieved when applying clustering algorithms outweigh the costs of both performing plain SLA mappings and doing nothing.
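
To illustrate the shape of that clustering step (our simulations use their own models and data; this toy version assumes scikit-learn is available), each submitted template can be encoded as a binary vector over the observed SLA parameters, and k-means then groups buyers with similar demand:

    from sklearn.cluster import KMeans

    PARAMETERS = ["Availability", "Storage", "CPU Cores",
                  "Incoming Bandwidth", "GPU Memory"]

    # Rows: which parameters each submitted template requests (toy data).
    templates = [
        [1, 1, 1, 1, 0],   # classic compute buyers
        [1, 1, 1, 0, 0],
        [1, 0, 1, 0, 1],   # a niche group asking for GPU memory
        [1, 0, 1, 0, 1],
    ]

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(templates)
    for row, label in zip(templates, kmeans.labels_):
        print(label, row)
    # Each cluster is a candidate public template or tailored product,
    # including niches such as the GPU-oriented group above.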

However, these are only preliminary results, and the full potential of SLA mappings has not yet been exploited. Integration into IDEs such as Eclipse, where cloud stakeholders could define SLA mappings using suitable domain-specific languages, e.g., visual modeling languages, is an open research issue and could facilitate the definition of SLA mappings by domain specialists.

The process of defining SLA mappings is still in its early stages; for now, these mappings are defined manually by the end users. However, with the development of appropriate infrastructures and middleware, mappings could be generated automatically. For example, if the attribute Price has to be translated to euros, a third-party service delivering the current USD/EUR exchange rate could be incorporated in an autonomic way, facilitating not only the mapping between different attributes but also the proper generation of the corresponding attribute values.
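
A sketch of how such an automated complex mapping might look follows; the rate-service URL and response schema are hypothetical, not something our middleware prescribes:

    import json
    import urllib.request

    RATE_SERVICE = "https://example.org/rates?pair=USD-EUR"  # hypothetical URL

    def price_usd_to_eur(price_usd: float) -> float:
        """Translate a Price value using a live (assumed) exchange-rate feed."""
        with urllib.request.urlopen(RATE_SERVICE) as response:
            rate = json.load(response)["rate"]  # assumed field: EUR per 1 USD
        return price_usd * rate

    # Wired into the middleware, such a helper would let the mapping translate
    # not only attribute names but also regenerate the attribute values.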

Aggregated and analyzed SLA mappings can deliver important information about the demand and structure of the market, thus facilitating the development of open and dynamic cloud markets. Market rules and structures can then be adapted on demand, based on the current development of products and market participants.

About the Author

Dr. Ivona Brandic is Assistant Professor at the Distributed Systems Group, Information Systems Institute, Vienna University of Technology (TU Wien).

Prior to that, she was Assistant Professor at the Department of Scientific Computing, Vienna University. She received her PhD degree from Vienna University of Technology in 2007. From 2003 to 2007 she participated in the special research project AURORA (Advanced Models, Applications and Software Systems for High Performance Computing) and the European Union’s GEMSS (Grid-Enabled Medical Simulation Services) project.

She is involved in the European Union’s SCube project and leads the Austrian national FoSII (Foundations of Self-governing ICT Infrastructures) project, funded by the Vienna Science and Technology Fund (WWTF). She is a Management Committee member of the European Commission’s COST Action on Energy Efficient Large Scale Distributed Systems. From June to August 2008 she was a visiting researcher at the University of Melbourne. Her interests comprise SLA and QoS management, service-oriented architectures, autonomic computing, workflow management, and large-scale distributed systems (cloud, grid, cluster, etc.).
 
