Mapping the SLA Landscape for High Performance Clouds

By Dr. Ivona Brandic

February 7, 2011

Cloud computing represents the convergence of several concepts in IT, ranging from virtualization and distributed application design to grid computing and enterprise IT management, making it a promising paradigm for the on-demand provisioning of ICT infrastructures.

During the past few years, significant effort has gone into the sub-fields of cloud research, including the development of federation mechanisms, cloud security, virtualization, and service management techniques.

While a wealth of work has addressed the technological development of clouds, very little has yet been done on the market mechanisms that support them.

As we have learned in the past, however (consider the case of grid technologies), appropriate market models for virtual goods, ease of use of those markets, low entry thresholds for traders and buyers, and suitable processes for defining and managing virtual goods remain challenging issues. How these topics are addressed will decide whether cloud computing takes root as a self-sustaining, state-of-the-art technology.
 
The current cloud landscape is characterized by two market mechanisms: users either select products from one of the big players with their well-defined but rigid offerings, or they rely on offline relationships with cloud providers offering niche products.

This division is especially marked in HPC, given its extensive special requirements, including specific security infrastructures, compliance with legal guidelines, massive scalability, and support for parallel code execution, among others. HPC thus suffers from a small number of comparable choices, resulting in low liquidity in current cloud markets and in provider/vendor lock-in.

Sufficient market liquidity is essential for dynamic and open cloud markets. Liquid markets are characterized by a high number of matches between bids and offers. With low market liquidity, traders run a high risk of not being able to sell their resources, while users risk not finding suitable products.

A crucial factor in achieving high market liquidity is the existence of standardized goods. Virtual goods, as traded in clouds, however, exhibit high variability in product description: very similar or almost identical goods can be described in different ways, with different attributes and parameters.

As shown in Table 1 below, computing resources traded in a PaaS fashion can be described through different non-standardized attributes, e.g., CPU cores, incoming bandwidth, processor type, or required storage. High variability in the description of goods again results in low market liquidity. Another important characteristic of virtual goods is that they change and evolve over time, following technological trends; the attribute "number of cores", for example, appeared only with the introduction of multicore architectures.

Table 1: Example SLA parameters

SLA Parameter        Value
Incoming Bandwidth   > 10 Mbit/s
Outgoing Bandwidth   > 12 Mbit/s
Storage              > 1024 GB
Availability         > 99%
CPU Cores            > 16

Based on the aforementioned observations, two challenging questions arise:

  • How can users’ demand and traders’ offers be channeled towards standardized products that can evolve and adapt over time while reflecting users’ needs and traders’ capabilities?
  • Which mechanisms do we need to achieve sufficient market liquidity, so that traders have a high probability of selling their products and users have a sufficient probability of buying the products they require?

To address these questions we make use of Service Level Agreements (SLAs), which are traditionally used to establish contracts between cloud traders and buyers.

Table 1 shows a typical SLA with parameters and corresponding values expressing non-functional requirements for service usage. SLA templates represent popular SLA formats containing all attributes and parameters, but without any values, and are usually used to channel the demand and supply of a market. Private templates are used within buyers’ and traders’ infrastructures and reflect the needs of the particular stakeholder in terms of the SLA parameters they use to establish a contract. Typical SLA parameters used at the PaaS level are depicted in Table 1 and include availability, incoming bandwidth, outgoing bandwidth, etc. Considering the high variability of virtual goods in cloud markets, the probability is high that the public templates used in marketplaces to attract buyers and sellers and the private templates of cloud stakeholders do not match.
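As a rough illustration, a public SLA template for a PaaS offering might look like the following XML fragment. The element and attribute names are hypothetical and do not reproduce the actual template format used in our work; they merely show that a template lists parameters and units without fixing any values:

    <SLATemplate id="public-paas-template">
      <SLAParameter name="Availability" unit="percent"/>
      <SLAParameter name="IncomingBandwidth" unit="Mbit/s"/>
      <SLAParameter name="OutgoingBandwidth" unit="Mbit/s"/>
      <SLAParameter name="Storage" unit="GB"/>
      <SLAParameter name="CPUCores" unit="count"/>
    </SLATemplate>

A concrete SLA instantiates such a template with values like those in Table 1, whereas a private template may use slightly different parameter names (e.g., NumberOfCores instead of CPUCores) or additional parameters, which is exactly where mismatches arise.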

One might think that traditional approaches such as semantic technologies, e.g., ontologies, could be used to channel the variety of SLA templates, or that public templates, downloaded and used within local business or scientific applications, could counteract the problem. However, ontologies are a rather static approach in which the dynamics of changing market demand and supply and of evolving products cannot be captured. Moreover, using public SLA templates in private business processes or scientific applications is in many cases not possible, since it requires changes to the local applications.

In the context of the Austrian national FoSII project (DSG group, Vienna University of Technology), we are investigating the self-governing cloud computing infrastructures necessary for the attainment of established Service Level Agreements (SLAs). Timely prevention of SLA violations requires advanced resource monitoring and knowledge management. In particular, we are developing novel techniques for mapping low-level resource metrics to high-level SLAs, bridging the gap between the metrics monitored by arbitrary monitoring tools and the SLA metrics guaranteed to the user, which are usually application-based; for example, measured uptime and downtime can be mapped to the availability percentage guaranteed in the SLA.

We apply various knowledge management techniques, for example Case-Based Reasoning, to prevent SLA violations before they occur while reducing energy consumption. In collaboration with Seoul National University we are exploring novel models for SLA mapping to counteract the problem of heterogeneous public and private templates in cloud markets. The SLA mapping approach enables market participants to define translations from their private templates to public SLA templates while keeping their private templates unchanged. The effects of the SLA mapping approach are twofold:

  • It increases market liquidity, since slightly different private templates are channeled towards a few publicly available public templates. Public templates can then be adapted frequently based on the supplied, aggregated, and analyzed SLA mappings; thus, publicly available SLA templates reflect the demand and supply of the market and can easily evolve.
  • By clustering the supplied SLA mappings, different groups of cloud buyers with similar demand can be identified. Based on this clustering, products can be tailored to specific groups of users, including product niches that are usually neglected in traditional markets.

SLA mapping is used to bridge the gap between inconsistent parts of two SLA templates, usually between the publicly available template and the private template. For the implementation of SLA mappings we use XSLT, a declarative XML-based language for the transformation of XML documents. The original document is not changed; rather, a new document is created based on the content of the original. Thus, if the original document is the private template of a cloud user and it differs from the public template, XSLT transformations can be defined that transform the private template into the public one.
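As a minimal sketch of such a transformation, the following XSLT fragment renames the parameter NumberOfCores used in a hypothetical private template to the CPUCores parameter expected by the public template, while copying everything else unchanged. The element and attribute names are illustrative assumptions, not the actual VieSLAF schema:

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Identity rule: copy the private template as-is by default -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>
      <!-- Rename the parameter to the name used in the public template -->
      <xsl:template match="SLAParameter[@name='NumberOfCores']">
        <SLAParameter name="CPUCores">
          <xsl:apply-templates select="@*[name()!='name']|node()"/>
        </SLAParameter>
      </xsl:template>
    </xsl:stylesheet>

Applying this stylesheet to the private template yields a new document in which the parameter appears under its public name, while the private template itself remains untouched.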

We distinguish two different types of mappings:

1. Ad-hoc SLA mapping. Such mappings define translations for a parameter that exists in both the private and the public SLA template. We distinguish simple ad-hoc mappings, i.e., mappings between different values of an SLA attribute or element, e.g., between the names CPU Cores and Number Of Cores of an SLA parameter, and complex ad-hoc mappings, i.e., mappings between different functions for calculating the value of an SLA parameter. An example of a complex mapping is converting the unit of the SLA parameter Price from EUR to USD, where a translation has to be defined from one function for calculating the price to another (see the sketch after this list). Although simple and complex mappings appear rather trivial, contracts cannot be established between non-matching templates without human intervention or without the overhead of a semantic layer, which in any case has to be managed manually.

2. Future SLA mapping. Such mappings express a wish to add a new SLA parameter supported by the application to a public SLA template, or to delete an existing SLA parameter from a public template. Unlike ad-hoc mappings, future mappings cannot be applied immediately, but possibly in the future. For example, a buyer could express the need for a specific SLA parameter that does not exist yet but can be integrated into the public templates after observation of the supplied SLA mappings.
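As a sketch of a complex ad-hoc mapping (the Price example mentioned above), the following XSLT fragment converts a Price parameter given in EUR in a hypothetical private template into USD for the public template. The element names and the fixed exchange rate are assumptions made purely for illustration; in practice the rate would come from an external source:

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Assumed exchange rate, hard-coded only for this sketch -->
      <xsl:variable name="eur-to-usd" select="1.35"/>
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>
      <!-- Complex mapping: recalculate the value and change the currency -->
      <xsl:template match="SLAParameter[@name='Price']">
        <SLAParameter name="Price" currency="USD">
          <xsl:value-of select="format-number(number(.) * $eur-to-usd, '0.00')"/>
        </SLAParameter>
      </xsl:template>
    </xsl:stylesheet>

Here the mapping does not merely rename an attribute but replaces one function for computing the parameter value with another, which is what distinguishes complex from simple ad-hoc mappings.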

So far we have implemented a first prototype of the VieSLAF (Vienna Service Level Agreement Framework) middleware for the management of SLA mappings, allowing users and traders to define, manage, and apply their mappings. In recent work we developed simulation models for defining market settings suitable for evaluating the SLA mapping approach in a real-world scenario. Based on the applied SLA mappings we defined utility and cost models for users and providers, and then applied three different methods for evaluating the supplied SLA mappings over a specific time span. We simulated market conditions with a number of market participants entering and leaving the market with different distributions of SLA parameters, thus requiring different SLA mapping scenarios.

Our first observations show promising results: we achieve high net utilities when weighing the utilities and costs of performing SLA mappings against doing nothing (i.e., not achieving a match in the market). Moreover, in our simulations we applied clustering algorithms to isolate clusters of SLA templates, which can be used as a starting point for the definition of various cloud products. The utilities achieved when applying clustering algorithms outperform those of performing SLA mappings alone and of doing nothing.

However, these are only preliminary results, and the full potential of SLA mappings is not yet exploited. Integration into IDEs such as Eclipse, where cloud stakeholders could define SLA mappings using suitable domain-specific languages, e.g., visual modeling languages, is an open research issue and could facilitate the definition of SLA mappings by domain specialists.

The process of defining SLA mappings is still in its early stages; for now, the mappings are defined manually by end users. However, with the development of appropriate infrastructures and middleware, mapping could be done automatically. For example, if the attribute Price has to be translated to Euro, a third-party service delivering the current USD/EUR exchange rate could be included in an autonomic way, facilitating not only the mapping between different attributes but also the proper generation of the corresponding attribute values.

Aggregated and analyzed SLA mappings can deliver important information about the demand and structure of the market, thus facilitating the development of open and dynamic cloud markets. Market rules and structures can then be adapted on demand, based on current developments in products and among market participants.

About the Author

Dr. Ivona Brandic is Assistant Professor at the Distributed Systems Group, Information Systems Institute, Vienna University of Technology (TU Wien).

Prior to that, she was Assistant Professor at the Department of Scientific Computing, University of Vienna. She received her PhD degree from Vienna University of Technology in 2007. From 2003 to 2007 she participated in the special research project AURORA (Advanced Models, Applications and Software Systems for High Performance Computing) and the European Union’s GEMSS (Grid-Enabled Medical Simulation Services) project.

She is involved in the European Union’s SCube project and is leading the Austrian national FoSII (Foundations of Self-governing ICT Infrastructures) project funded by the Vienna Science and Technology Fund (WWTF). She is a Management Committee member of the European Commission’s COST Action on Energy Efficient Large Scale Distributed Systems. From June to August 2008 she was a visiting researcher at the University of Melbourne. Her interests comprise SLA and QoS management, service-oriented architectures, autonomic computing, workflow management, and large-scale distributed systems (cloud, grid, cluster, etc.).
 
