Mapping the SLA Landscape for High Performance Clouds

By Dr. Ivona Brandic

February 7, 2011

Cloud computing represents the convergence of several concepts in IT, ranging from virtualization and distributed application design to grid computing and enterprise IT management, making it a promising paradigm for the on-demand provisioning of ICT infrastructures.

Over the past few years, significant effort has gone into the sub-fields of cloud research, including the development of federation mechanisms, cloud security, virtualization, and service management techniques.

While a wealth of work has addressed the technological development of clouds, very little has been done on the market mechanisms that support them.

As past experience shows, however (consider the case of grid technologies), appropriate market models for virtual goods, ease of use of those markets, low entry barriers for traders and buyers, and suitable processes for defining and managing virtual goods remain challenging issues. How these topics are addressed will decide whether cloud computing takes root as a self-sustaining, state-of-the-art technology.
 
The current cloud landscape is characterized by two market mechanisms: users either select products from one of the big players, with their well-defined but rigid offerings, or rely on offline relationships with cloud providers offering niche products.

This division is especially marked in HPC, given its comprehensive special requirements, including specific security infrastructures, compliance with legal guidelines, massive scalability, and support for parallel code execution, among others. HPC thus suffers from a low number of comparable choices, resulting in low liquidity of current cloud markets and provider/vendor lock-in.

Sufficient market liquidity is essential for dynamic and open cloud markets. Liquid markets are characterized by a high number of matches between bids and offers. With low market liquidity, traders run a high risk of not being able to trade their resources, while users risk not finding suitable products.

A crucial factor in achieving high market liquidity is the existence of standardized goods. Virtual goods, such as those traded in clouds, however, exhibit high variability in product description: very similar or almost identical goods can be described in various ways, with different attributes and parameters.

As shown in Table 1 below, computing resources traded in a PaaS fashion can be described through different, non-standardized attributes, e.g., CPU cores, incoming bandwidth, processor type, or required storage. This high variability in the description of goods again results in low market liquidity. Another important characteristic of virtual goods is that they change and evolve over time, following technological trends; for example, the attribute "number of cores" appeared only with the introduction of multi-core architectures.

Table 1: Example SLA parameters

SLA Parameter        Value
------------------   -----------
Incoming Bandwidth   > 10 Mbit/s
Outgoing Bandwidth   > 12 Mbit/s
Storage              > 1024 GB
Availability         > 99%
CPU Cores            > 16

Based on the aforementioned observations, two challenging questions arise:

  • How can users’ demand and traders’ offers be channeled towards standardized products that can evolve and adapt over time, reflecting users’ needs and traders’ capabilities?
  • Which mechanisms do we need to achieve sufficient market liquidity, so that traders have a high probability of selling their products and users have a sufficient probability of buying the products they require?

To counteract this problem, we make use of Service Level Agreements (SLAs), which are traditionally used to establish contracts between cloud traders and buyers.

Table 1 shows a typical SLA, with parameters and corresponding values expressing non-functional requirements for service usage. SLA templates are widely used SLA formats containing all attributes and parameters but without any values; they are usually used to channel the demand and supply of a market. Private templates are used within the buyers' and traders' own infrastructures and reflect the needs of the particular stakeholder in terms of the SLA parameters used to establish a contract. Typical SLA parameters at the PaaS level, as depicted in Table 1, include availability, incoming bandwidth, outgoing bandwidth, and so on. Given the high variability of virtual goods in cloud markets, there is a high probability that the public templates used in marketplaces to attract buyers and sellers and the private templates of cloud stakeholders do not match.
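To make the mismatch concrete, the sketch below contrasts a hypothetical public marketplace template with a slightly different private template; the attribute names and values are illustrative assumptions, not taken from any concrete marketplace.

```python
# Hypothetical public marketplace template vs. a buyer's private template.
# Attribute names and values are illustrative only.

public_template = {
    "Incoming Bandwidth": ">= 10 Mbit/s",
    "Outgoing Bandwidth": ">= 12 Mbit/s",
    "Storage": ">= 1024 GB",
    "Availability": ">= 99 %",
    "CPU Cores": ">= 16",
}

# The buyer describes an almost identical product with different attribute
# names and units -- exactly the mismatch that SLA mapping has to bridge.
private_template = {
    "Inbound Bandwidth": ">= 10 Mbit/s",
    "Outbound Bandwidth": ">= 12 Mbit/s",
    "Disk Space": ">= 1 TB",
    "Uptime": ">= 99 %",
    "Number Of Cores": ">= 16",
}
```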

One might think that traditional approaches such as semantic technologies, e.g., ontologies, could be used to channel the variety of SLA templates, or that public templates, downloaded and used within local business or scientific applications, could counteract the problem. However, ontologies are a highly static approach that cannot capture the dynamics of changing market demand and supply or of evolving products. Moreover, using public SLA templates directly in private business processes or scientific applications is in many cases not possible, since it requires changes to the local applications.

In the context of the Austrian national FoSII project (DSG group, Vienna University of Technology), we are investigating the self-governing cloud computing infrastructures necessary for attaining established Service Level Agreements (SLAs). Timely prevention of SLA violations requires advanced resource monitoring and knowledge management. In particular, we are developing novel techniques for mapping low-level resource metrics to high-level SLAs, bridging the gap between the metrics monitored by arbitrary monitoring tools and the SLA metrics guaranteed to the user, which are usually application-based.
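As a minimal sketch of such a low-level-to-high-level mapping, assume uptime/downtime counters as the monitored low-level metrics and a 99% availability guarantee; both the metric names and the rule are illustrative, not the FoSII implementation.

```python
# Minimal sketch: derive a high-level SLA metric (availability) from
# low-level monitoring data (uptime/downtime in seconds). Names and the
# derivation rule are illustrative assumptions, not the FoSII implementation.

def availability_from_monitoring(uptime_s: float, downtime_s: float) -> float:
    """Map raw uptime/downtime counters to an availability percentage."""
    total = uptime_s + downtime_s
    return 100.0 * uptime_s / total if total > 0 else 100.0

def violates_sla(availability_pct: float, guaranteed_pct: float = 99.0) -> bool:
    """Check the derived metric against the value guaranteed in the SLA."""
    return availability_pct < guaranteed_pct

# Example: 30 minutes of downtime in a 30-day month.
avail = availability_from_monitoring(uptime_s=30 * 24 * 3600 - 1800, downtime_s=1800)
print(round(avail, 3), violates_sla(avail))   # ~99.931, False
```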

We apply various knowledge management techniques, for example Case Based Reasoning, to prevent SLA violations before they occur while also reducing energy consumption. In collaboration with Seoul National University, we are exploring novel models for SLA mapping to counteract the problem of heterogeneous public and private templates in cloud markets. The SLA mapping approach enables market participants to define translations from their private templates to public SLA templates while keeping their private templates unchanged. The effects of the SLA mapping approach are twofold:

  • It increases market liquidity, since slightly different private templates are channeled towards a few publicly available public templates. These public templates can then be adapted frequently based on the supplied, aggregated, and analyzed SLA mappings, so that they reflect the demand and supply of the market.
  • By clustering the supplied SLA mappings, groups of cloud buyers with similar demand can be identified. Based on this clustering information, products can be tailored to specific groups of users, including the generation of product niches that are usually neglected in traditional markets (a sketch of such a clustering step follows after this list).
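A minimal sketch of such a clustering step is shown below, assuming each supplied SLA mapping is encoded as a binary vector over the public-template parameters it touches and that k-means is used; both the encoding and the algorithm choice are assumptions for illustration.

```python
# Sketch: cluster supplied SLA mappings to find groups of buyers with
# similar demand. Feature encoding and algorithm choice are assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Rows: one supplied SLA mapping per market participant.
# Columns: 1 if the mapping touches a given public-template parameter.
#           [Bandwidth, Storage, Availability, CPU Cores, Price]
mappings = np.array([
    [1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 1],
    [1, 0, 1, 0, 0],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(mappings)
print(labels)  # participants with similar mapping patterns share a cluster label
```

Each resulting cluster hints at a group of buyers whose private templates diverge from the public template in a similar way, and thus at a candidate product definition.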

SLA mapping is used to bridge the gap between inconsistent parts of two SLA templates, usually between the publicly available template and the private template. For the implementation of SLA mappings we use XSLT, a declarative XML-based language for the transformation of XML documents. The original document is not changed; rather, a new document is created based on the content of the original. Thus, if the original document is the private template of a cloud user and differs from the public template, XSLT transformations can be defined that transform the private template into the public one.
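A minimal sketch of such a transformation is shown below, assuming the private template names a parameter CPU Cores while the public template expects Number Of Cores; the element and attribute names are illustrative, not the actual template schema.

```python
# Sketch: apply an XSLT mapping that renames the SLA parameter
# "CPU Cores" (private template) to "Number Of Cores" (public template).
# Element and attribute names are illustrative assumptions.
from lxml import etree

private_sla = etree.XML("""
<SLATemplate>
  <Parameter name="CPU Cores" value="16"/>
</SLATemplate>
""")

xslt = etree.XSLT(etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- copy everything unchanged by default (identity transform) -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- rename the mismatching parameter; its value is carried over -->
  <xsl:template match="Parameter[@name='CPU Cores']/@name">
    <xsl:attribute name="name">Number Of Cores</xsl:attribute>
  </xsl:template>
</xsl:stylesheet>
"""))

print(etree.tostring(xslt(private_sla), pretty_print=True).decode())
```

Running the script prints the transformed template with the renamed parameter, while the original private template remains untouched, in line with the declarative nature of XSLT.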

We distinguish two different types of mappings:

1. Ad-hoc SLA mapping. Such mappings define translations for a parameter that exists in both the private and the public SLA template. We distinguish simple ad-hoc mappings, i.e., mappings between different values of an SLA attribute or element, e.g., between the names CPU Cores and Number Of Cores of an SLA parameter, and complex ad-hoc mappings, i.e., mappings between different functions for calculating the value of an SLA parameter. An example of a complex mapping is translating the unit of the SLA parameter Price from EUR to USD, where a translation has to be defined from one price-calculation function to another (see the sketch after this list). Although simple and complex mappings appear rather trivial, without them contracts cannot be established between non-matching templates without human intervention or without the overhead of a semantic layer, which in any case has to be managed manually.

2. Future SLA mapping. Such mappings express a wish to add a new SLA parameter supported by the application to a public SLA template, or to delete an existing SLA parameter from a public template. Unlike ad-hoc mappings, future mappings cannot be applied immediately, but possibly in the future. For example, a buyer could express the need for a specific SLA parameter that does not yet exist but can be integrated into the public templates after observation of the supplied SLA mappings.
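The sketch below illustrates both ad-hoc mapping types in simplified form; the mapping table, the fixed exchange rate, and the function signatures are illustrative assumptions rather than the actual mapping implementation.

```python
# Sketch of the two ad-hoc mapping types. The mapping table, the fixed
# exchange rate, and the function signatures are illustrative assumptions.

# Simple ad-hoc mapping: translate an attribute name used in the private
# template to the name expected by the public template.
SIMPLE_MAPPINGS = {"CPU Cores": "Number Of Cores"}

def map_name(private_name: str) -> str:
    return SIMPLE_MAPPINGS.get(private_name, private_name)

# Complex ad-hoc mapping: translate between two functions for computing a
# value, here a Price expressed in EUR versus USD.
EUR_TO_USD = 1.36  # hypothetical fixed rate; in practice this would vary

def price_eur_to_usd(price_eur: float) -> float:
    return price_eur * EUR_TO_USD

print(map_name("CPU Cores"))     # Number Of Cores
print(price_eur_to_usd(100.0))   # 136.0
```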

So far we have implemented a first prototype of the VieSLAF (Vienna Service Level Agreement Framework) middleware for the management of SLA mappings, allowing users and traders to define, manage, and apply their mappings. In recent work we developed simulation models for defining market settings suitable for evaluating the SLA mapping approach in a real-world scenario. Based on the applied SLA mappings, we defined utility and cost models for users and providers. We then applied three different methods for evaluating the supplied SLA mappings over a specific time span, simulating market conditions with a number of market participants entering and leaving the market with different distributions of SLA parameters and thus requiring different SLA mapping scenarios.

Our first observations show promising results: we achieve high net utilities when weighing the utilities and costs of performing SLA mappings against doing nothing (i.e., not achieving a match in the market). Moreover, in our simulations we applied clustering algorithms and isolated clusters of SLA templates, which can be used as a starting point for the definition of various cloud products. The utilities achieved when applying clustering algorithms outweigh both the costs of performing SLA mappings and those of doing nothing.
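The net-utility comparison can be read roughly as in the sketch below, with assumed utility and cost values; the actual utility and cost models used in the simulations are more elaborate.

```python
# Sketch of the net-utility comparison with assumed numbers; the actual
# utility and cost models used in the simulations are more elaborate.

def net_utility(match_utility: float, mapping_cost: float) -> float:
    """Net benefit of defining SLA mappings and trading on the market."""
    return match_utility - mapping_cost

utility_with_mapping = net_utility(match_utility=10.0, mapping_cost=2.0)
utility_doing_nothing = 0.0  # no mapping cost, but also no match achieved

print(utility_with_mapping > utility_doing_nothing)  # True for these assumed values
```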

However, these are only preliminary results, and the full potential of SLA mappings has not yet been exploited. Integration into IDEs such as Eclipse, where cloud stakeholders could define SLA mappings using suitable domain-specific languages, e.g., visual modeling languages, is an open research issue and could facilitate the definition of SLA mappings by domain specialists.

The process of defining SLA mappings is still in its early stages; for now, these mappings are defined manually by end users. However, with the development of appropriate infrastructures and middleware, mapping could be done automatically. For example, if the attribute Price has to be translated to Euros, a third-party service delivering the current USD/EUR exchange rate could be included in an autonomic way, facilitating not only the mapping between different attributes but also the proper generation of the corresponding attribute values.
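A sketch of such an autonomic hook, assuming a hypothetical rate provider (no real exchange-rate service or API is implied), might look like this:

```python
# Sketch: plug a third-party exchange-rate source into an automatic mapping
# step. `fetch_usd_eur_rate` is a hypothetical placeholder, not a real
# service API; here it simply returns a fixed value.
from typing import Callable

def fetch_usd_eur_rate() -> float:
    return 0.74  # placeholder; a real system would query an external service

def map_price_to_eur(price_usd: float,
                     rate_source: Callable[[], float] = fetch_usd_eur_rate) -> float:
    """Translate the Price attribute to EUR using the injected rate source."""
    return price_usd * rate_source()

print(round(map_price_to_eur(136.0), 2))  # ~100.64 with the placeholder rate
```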

Aggregated and analyzed SLA mappings can deliver important information about the demand and structure of the market, thus facilitating the development of open and dynamic cloud markets. Market rules and structures can then be adapted on demand, based on current developments among products and market participants.

About the Author

Dr. Ivona Brandic is Assistant Professor at the Distributed Systems Group, Information Systems Institute, Vienna University of Technology (TU Wien).

Prior to that, she was Assistant Professor at the Department of Scientific Computing, University of Vienna. She received her PhD from the Vienna University of Technology in 2007. From 2003 to 2007 she participated in the special research project AURORA (Advanced Models, Applications and Software Systems for High Performance Computing) and the European Union’s GEMSS (Grid-Enabled Medical Simulation Services) project.

She is involved in the European Union’s SCube project and is leading the Austrian national FoSII (Foundations of Self-governing ICT Infrastructures) project, funded by the Vienna Science and Technology Fund (WWTF). She is a Management Committee member of the European Commission’s COST Action on Energy Efficient Large Scale Distributed Systems. From June to August 2008 she was a visiting researcher at the University of Melbourne. Her interests comprise SLA and QoS management, service-oriented architectures, autonomic computing, workflow management, and large-scale distributed systems (cloud, grid, cluster, etc.).
 
