Mapping the SLA Landscape for High Performance Clouds

By Dr. Ivona Brandic

February 7, 2011

Cloud computing represents the convergence of several concepts in IT, from virtualization and distributed application design to grid computing and enterprise IT management, making it a promising paradigm for the on-demand provisioning of ICT infrastructures.

During the past few years, significant effort has been made in the sub-fields of cloud research, including the development of federation mechanisms, cloud security, virtualization, and service management techniques.

While a wealth of work has addressed the technological development of clouds, very little has been done on the market mechanisms that support them.

As we learned in the past, however (consider the case of grid technologies), appropriate market models for virtual goods, ease of use of those markets, low entry thresholds for traders and buyers, and appropriate processes for the definition and management of virtual goods have remained challenging issues. How these topics are addressed will determine whether cloud computing takes root as a self-sustaining, state-of-the-art technology.
 
The current cloud landscape is characterized by two market mechanisms: users either select products from one of the big players, with their well-defined but rigid sets of offerings, or rely on off-line relationships with cloud providers offering niche products.

This division is especially marked in the area of HPC, given its comprehensive special requirements, including specific security infrastructures, compliance with legal guidelines, massive scalability, and support for parallel code execution, among others. HPC thus suffers from a low number of comparable choices, resulting in low liquidity of current cloud markets and provider/vendor lock-in.

Sufficient market liquidity is essential for dynamic and open cloud markets. Liquid markets are characterized by a high number of matches between bids and offers. With low market liquidity, traders run a high risk of not being able to sell their resources, while users risk not finding suitable products.

A crucial factor in achieving high market liquidity is the existence of standardized goods. Virtual goods, as traded in clouds, however, exhibit high variability in product description: very similar or almost identical goods can be described in various ways, with different attributes and parameters.

As shown in Table 1 below, computing resources traded in a PaaS fashion can be described through different non-standardized attributes, e.g., CPU cores, incoming bandwidth, processor types, or required storage. This high variability in the description of goods again results in low market liquidity. Another important characteristic of virtual goods is that they change and evolve over time, following technological trends. For example, the attribute "number of cores" appeared only with the introduction of multi-core architectures.

Table 1: Example SLA parameters

Parameter            Value
------------------   -----------
Incoming Bandwidth   > 10 Mbit/s
Outgoing Bandwidth   > 12 Mbit/s
Storage              > 1024 GB
Availability         > 99%
CPU Cores            > 16

Based on the aforementioned observations, two challenging questions arise:

  • How can users’ demand and traders’ offers be channeled towards standardized products, which can evolve and adapt over time and reflect users’ needs and traders’ capabilities?
  • Which mechanisms do we need to achieve sufficient market liquidity, where traders have a high probability of selling their products and users have a sufficient probability of finding the products they require?

To counteract this problem, we make use of Service Level Agreements (SLAs), which are traditionally used to establish contracts between cloud traders and buyers.

Table 1 shows a typical SLA, with parameters and corresponding values expressing non-functional requirements for service usage. SLA templates represent popular SLA formats that contain all attributes and parameters but no values; they are usually used to channel the demand and supply of a market. Private templates are used within the buyers' and traders' own infrastructures and reflect the needs of a particular stakeholder in terms of the SLA parameters they use to establish contracts. Typical SLA parameters used at the PaaS level are depicted in Table 1 and include availability, incoming bandwidth, outgoing bandwidth, etc. Given the high variability of virtual goods in cloud markets, there is a high probability that the public templates used in marketplaces to attract buyers and sellers and the private templates of cloud stakeholders will not match.

One could think that traditional approaches such as semantic technologies, e.g., ontologies, could be used to channel the variety of SLA templates. Likewise, public templates that can be downloaded and used within local business or scientific applications could counteract the problem of template variety. However, ontologies are a highly static approach in which the dynamics of the market's changing demand and supply and of evolving products cannot be captured. Moreover, using public SLA templates in private business processes or scientific applications is in many cases not possible, since it requires changes to the local applications.

In the context of the Austrian national FoSII project (DSG group, Vienna University of Technology), we are investigating the self-governing cloud computing infrastructures necessary for the attainment of established Service Level Agreements (SLAs). Timely prevention of SLA violations requires advanced resource monitoring and knowledge management. In particular, we are developing novel techniques for mapping low-level resource metrics to high-level SLAs, bridging the gap between the metrics monitored by arbitrary monitoring tools and the SLA metrics guaranteed to the user, which are usually application-based.
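
As a minimal illustration of such a metric-to-SLA mapping (a sketch only; the function and values below are assumptions for illustration, not the FoSII implementation), a low-level monitored metric such as accumulated downtime can be translated into the high-level SLA metric Availability from Table 1:

    # Sketch: map a low-level monitored metric (accumulated downtime)
    # to the high-level SLA metric "Availability" guaranteed to the user.
    # Names and values are illustrative assumptions.
    def availability_percent(downtime_s: float, period_s: float) -> float:
        """Availability over a monitoring period, in percent."""
        return 100.0 * (period_s - downtime_s) / period_s

    # 18 s of downtime measured over one hour of monitoring:
    avail = availability_percent(downtime_s=18.0, period_s=3600.0)
    print(f"{avail:.2f}%")   # 99.50%
    print(avail > 99.0)      # True: the guarantee "Availability > 99%" holds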

We apply various knowledge management techniques, for example Case-Based Reasoning, for the prevention of SLA violations before they occur while reducing energy consumption. In collaboration with Seoul National University, we are exploring novel models for SLA mapping to counteract the problem of heterogeneous public and private templates in cloud markets. The SLA mapping approach allows market participants to define translations from their private templates to public SLA templates while keeping their private templates unchanged. The effects of the SLA mapping approach are twofold:

  • It increases market liquidity, since slightly different private templates are channeled towards a few publicly available templates. Public templates can then be frequently adapted based on the supplied, aggregated, and analyzed SLA mappings; publicly available SLA templates thus reflect the demand and supply of the market and can be easily adapted.
  • By clustering the supplied SLA mappings, different groups of cloud buyers with similar demand can be identified. Based on this clustering information, products can be tailored to specific groups of users. This also includes the generation of product niches, which are usually neglected in traditional markets (see the clustering sketch after this list).
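
As a minimal sketch of such a clustering step (the data and the choice of k-means are illustrative assumptions, not the FoSII implementation), buyers can be represented by vectors over the mappings they supplied and grouped with an off-the-shelf clustering algorithm:

    # Sketch: cluster buyers by the SLA mappings they supplied, to
    # identify groups with similar demand. The data and the choice of
    # k-means are illustrative assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    # One row per buyer, one column per known mapping, e.g.
    # [CPUCores->NumberOfCores, InBandwidth->IncomingBandwidth, Price EUR->USD];
    # 1 = buyer supplied this mapping, 0 = did not.
    buyers = np.array([[1, 1, 0],    # buyer A
                       [1, 1, 0],    # buyer B
                       [0, 0, 1],    # buyer C
                       [0, 1, 1]])   # buyer D

    labels = KMeans(n_clusters=2, n_init=10).fit_predict(buyers)
    print(labels)  # e.g. [0 0 1 1]: two buyer groups with similar demand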

SLA mapping is used to bridge the gap between inconsistent parts of two SLA templates, usually between the publicly available template and a private template. For the implementation of SLA mappings we use XSLT, a declarative XML-based language for the transformation of XML documents. The original document is not changed; rather, a new document is created based on the content of the original. Thus, if the original document is the private template of a cloud user and differs from the public template, XSLT transformations can be defined that translate the private template into the public one.
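
The following is a minimal sketch of such a transformation, applied here with Python's lxml for illustration (the template structure and parameter names are assumptions, not the VieSLAF schemas):

    # Sketch: an SLA mapping implemented as an XSLT transformation.
    # The private template uses the name "CPUCores"; the public
    # template expects "NumberOfCores". Structure is illustrative.
    from lxml import etree

    private_template = etree.XML(
        '<SLATemplate><Parameter name="CPUCores">16</Parameter></SLATemplate>')

    xslt = etree.XML('''
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Identity template: copy everything unchanged by default. -->
      <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
      </xsl:template>
      <!-- Simple ad-hoc mapping: rename CPUCores to NumberOfCores. -->
      <xsl:template match="Parameter[@name='CPUCores']">
        <Parameter name="NumberOfCores">
          <xsl:apply-templates select="node()"/>
        </Parameter>
      </xsl:template>
    </xsl:stylesheet>''')

    public_view = etree.XSLT(xslt)(private_template)
    print(str(public_view))
    # -> <SLATemplate><Parameter name="NumberOfCores">16</Parameter></SLATemplate>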

We distinguish two different types of mappings:

1. Ad-hoc SLA mapping. Such mappings define translations between a parameter existing in both the private and the public SLA template. We distinguish simple ad-hoc mappings, i.e., mappings between different values of an SLA attribute or SLA element, e.g., between the names CPU Cores and Number Of Cores of an SLA parameter, and complex ad-hoc mappings, i.e., mappings between different functions for calculating the value of an SLA parameter. An example of a complex mapping is translating the unit of the SLA parameter Price from EUR to USD, where a translation has to be defined from one function for calculating the price to another (a sketch of such a mapping follows this list). Although simple and complex mappings appear rather trivial, contracts cannot be established between non-matching templates without human intervention or without the overhead of a semantic layer, which in any case has to be managed manually.

2. Future SLA mapping. Such mappings define a wish to add a new SLA parameter supported by the application to a public SLA template, or a wish to delete an existing SLA parameter from a public template. Unlike ad-hoc mappings, future mappings cannot be applied immediately, but possibly in the future. For example, a buyer could express the need for a specific SLA parameter that does not yet exist but that can be integrated into the public templates after observation of the supplied SLA mappings.
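
As a sketch of the complex ad-hoc mapping mentioned in item 1 (illustrative only; a fixed example rate stands in for a real price-calculation function), the value of the Price parameter can be recomputed during the transformation:

    # Sketch: a complex ad-hoc mapping that recomputes a parameter value.
    # The fixed rate 1.36 is an illustrative stand-in for a real
    # EUR/USD price-calculation function.
    from lxml import etree

    private_template = etree.XML(
        '<SLATemplate><Parameter name="Price" unit="EUR">100</Parameter></SLATemplate>')

    xslt = etree.XML('''
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
      </xsl:template>
      <!-- Complex mapping: convert the Price value from EUR to USD. -->
      <xsl:template match="Parameter[@name='Price']">
        <Parameter name="Price" unit="USD">
          <xsl:value-of select=". * 1.36"/>
        </Parameter>
      </xsl:template>
    </xsl:stylesheet>''')

    print(str(etree.XSLT(xslt)(private_template)))
    # -> <SLATemplate><Parameter name="Price" unit="USD">136</Parameter></SLATemplate>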

So far we have implemented the first prototype of the VieSLAF (Vienna Service Level Agreement Framework) middleware for the management of SLA mappings, allowing users and traders to define, manage, and apply their mappings. In our recent work we developed simulation models for the definition of market settings suitable for evaluating the SLA mapping approach in a real-world scenario. Based on the applied SLA mappings, we defined utility and cost models for users and providers. We then applied three different methods for evaluating the supplied SLA mappings over a specific time span. We simulated market conditions with a number of market participants entering and leaving the market with different distributions of SLA parameters, thus requiring different SLA mapping scenarios.
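
As a rough sketch of the underlying comparison (the actual utility and cost models from the simulations are not reproduced here; the numbers below are illustrative assumptions):

    # Sketch: net utility of applying an SLA mapping vs. doing nothing.
    # Values are illustrative assumptions, not the simulation models.
    def net_utility(match_utility: float, mapping_cost: float) -> float:
        """Utility gained from a market match, net of the mapping cost."""
        return match_utility - mapping_cost

    u_mapping = net_utility(match_utility=10.0, mapping_cost=2.0)  # 8.0
    u_nothing = 0.0  # no mapping, no match in the market
    print(u_mapping > u_nothing)  # True: mapping pays off in this setting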

Our first observations show promising results: we achieve high net utilities when weighing the utilities and costs of doing SLA mappings against doing nothing (i.e., not achieving a match in the market). Moreover, in our simulations we applied clustering algorithms and isolated clusters of SLA templates, which can be used as a starting point for the definition of various cloud products. The utilities achieved when applying clustering algorithms outperform those of doing SLA mappings alone and of doing nothing.

However, these are only preliminary results, and the full potential of SLA mappings is not yet exploited. Integration into IDEs such as Eclipse, where cloud stakeholders could define SLA mappings using suitable domain-specific languages, e.g., visual modeling languages, is an open research issue and could facilitate the definition of SLA mappings by domain specialists.

The process of defining SLA mappings is still in its early stages; for now, these mappings are defined manually by end users. With the development of appropriate infrastructures and middleware, however, mappings could be generated automatically. For example, if the attribute Price has to be translated to Euro, a third-party service delivering the current USD/EUR exchange rate could be included in an autonomic way, facilitating not only the mapping between different attributes but also the proper generation of the corresponding attribute values.
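
A minimal sketch of such an autonomic translation follows (the service URL and response format are hypothetical placeholders; a real deployment would use an actual rate provider and handle failures and caching):

    # Sketch: autonomic value generation for a complex mapping, using a
    # third-party exchange-rate service. The URL and the JSON response
    # format are hypothetical placeholders.
    import json
    from urllib.request import urlopen

    def price_usd_to_eur(price_usd: float) -> float:
        with urlopen("https://rates.example.com/usd-eur") as resp:  # hypothetical service
            rate = json.load(resp)["rate"]                          # hypothetical format
        return price_usd * rate

    # The mapping middleware could then fill in the translated value:
    # price_eur = price_usd_to_eur(100.0)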

Aggregated and analyzed SLA mappings can deliver important information about the demand and structure of the market, thus facilitating the development of open and dynamic cloud markets. Market rules and structures can then be adapted on demand, based on the current development of products and market participants.

About the Author

Dr. Ivona Brandic is Assistant Professor at the Distributed Systems Group, Information Systems Institute, Vienna University of Technology (TU Wien).

Prior to that, she was Assistant Professor at the Department of Scientific Computing, University of Vienna. She received her PhD degree from Vienna University of Technology in 2007. From 2003 to 2007 she participated in the special research project AURORA (Advanced Models, Applications and Software Systems for High Performance Computing) and the European Union's GEMSS (Grid-Enabled Medical Simulation Services) project.

She is involved in the European Union's S-Cube project and leads the Austrian national FoSII (Foundations of Self-governing ICT Infrastructures) project funded by the Vienna Science and Technology Fund (WWTF). She is a Management Committee member of the European Commission's COST Action on Energy Efficient Large Scale Distributed Systems. From June to August 2008 she was a visiting researcher at the University of Melbourne. Her interests comprise SLA and QoS management, service-oriented architectures, autonomic computing, workflow management, and large-scale distributed systems (cloud, grid, cluster, etc.).
 
