Category talk:2011-O&C

From Soma-notes

Papers

Observability

  • How do we define 'public' action? How do we monitor 'public' action without monitoring every action?
  • How can you make sure your agent is acting according to your instructions?
  • How can we ensure that information we receive through a third-party is legitimate?
  • What CAN be observed?

Contract Monitoring

Contract Monitoring in Agent-Based Systems: Case Study from Lecture Notes in Computer Science by Jiří Hodík, Jiří Vokřínek and Michal Jakob, 2009

Abstract

Monitoring of fulfilment of obligations defined by electronic contracts in distributed domains is presented in this paper. A two-level model of contract-based systems and the types of observations needed for contract monitoring are introduced. The observations (inter-agent communication and agents’ actions) are collected and processed by the contract observation and analysis pipeline. The presented approach has been utilized in a multi-agent system for electronic contracting in a modular certification testing domain.

Summary

Andrew

Monitoring Service Contracts

An Agent-Based Framework for Monitoring Service Contracts from Lecture Notes in Computer Science by Helmut Kneer, Henrik Stormer, Harald Häuschen and Burkhard Stiller, 2002

Abstract

Within the past few years, the variety of real-time multimedia streaming services on the Internet has grown steadily. Performance of streaming services is very sensitive to traffic congestion and results very often in poor service quality on today’s best effort Internet. Reasons include the lack of any traffic prioritization mechanisms on the network level and its dependence on the cooperation of several Internet Service Providers and their reliable transmission of data packets. Therefore, service differentiation and its reliable delivery must be enforced on a business level through the introduction of service contracts between service providers and their customers. However, compliance with such service contracts is the crucial point that decides about successful improvement of the service delivery process. For that reason, an agent-based monitoring framework has been developed and introduced enabling the use of mobile agents to monitor compliance with contractual agreements between service providers and service customers. This framework describes the setup and the functionality of different kinds of mobile agents that allow monitoring of service contracts across domains of multiple service providers.

Summary

Andrew

Contracts

  • What can or can't be contracted?
  • How can you quantify abstract resources?
  • How can two or more parties agree with a minimum of intervention?

Some forms of contracts exist in the form of Service Level Agreements, and there have been efforts made to automate this process:

AURIC

AURIC: A Scalable and Highly Reusable SLA Compliance Auditing Framework from Lecture Notes in Computer Science, by Hasan and Burkhard Stiller, 2007.

Abstract

Service Level Agreements (SLA) are needed to allow business interactions to rely on Internet services. Service Level Objectives (SLO) specify the committed performance level of a service. Thus, SLA compliance auditing aims at verifying these commitments. Since SLOs for various application services and end-to-end performance definitions vary largely, automated auditing of SLA compliances poses the challenge to an auditing framework. Moreover, end-to-end performance data are potentially large for a provider with many customers. Therefore, this paper presents a scalable and highly reusable auditing framework and a prototype, termed AURIC (Auditing Framework for Internet Services), whose components can be distributed across different domains.

Summary

TJ

Bandwidth

SLA-Driven Flexible Bandwidth Reservation Negotiation Schemes for QoS Aware IP Networks from Lecture Notes in Computer Science by Gerard Parr and Alan Marshall, 2004.

Abstract

We present a generic Service Level Agreement (SLA)-driven service provisioning architecture, which enables dynamic and flexible bandwidth reservation schemes on a per-user or a per-application basis. Various session level SLA negotiation schemes involving bandwidth allocation, service start time and service duration parameters are introduced and analysed. The results show that these negotiation schemes can be utilised for the benefit of both end user and network provider, such as getting the highest individual SLA optimisation in terms of Quality of Service (QoS) and price. A prototype based on an industrial agent platform has also been built to demonstrate the negotiation scenario and this is presented and discussed.

Summary

Claimed by Scott

Dynamic Adaptation

Context-Driven Autonomic Adaptation of SLA from Lecture notes in Computer Science, by authors Caroline Herssens, Stéphane Faulkner and Ivan Jureta, 2008.

Abstract

Service Level Agreements (SLAs) are used in Service-Oriented Computing to define the obligations of the parties involved in a transaction. SLAs define the service users’ Quality of Service (QoS) requirements that the service provider should satisfy. Requirements defined once may not be satisfiable when the context of the web services changes (e.g., when requirements or resource availability changes). Changes in the context can make SLAs obsolete, making SLA revision necessary. We propose a method to autonomously monitor the services’ context, and adapt SLAs to avoid obsolescence thereof.

Summary

TJ

Heuristics for Enforcing Service Level Agreements

Heuristics for Enforcing Service Level Agreements in a Public Computing Utility, a master's thesis by Balasubramaneyam Maniymaran.

Abstract

With the increasing popularity of consumer and research oriented wide-area applications, there arises a need for a robust and efficient wide-area resource management system. Even though there exist a number of systems for wide-area resource management, they fail to couple QoS management with cost management, which is the key issue in making such a system commercially successful. Further, the lack of IT skills within companies creates the need to decouple service management from the underlying complex wide-area resource management. A public computing utility (PCU) addresses both these issues, and, in addition, it creates a marketplace for selling idle computing resources.

This work proposes a PCU model addressing the above mentioned issues and develops heuristics to enforce QoS in that model. A new concept called virtual clusters (VCs) is introduced as semi-dynamic, service specific resource partitions of a PCU, optimizing cost, QoS, and resource utilization. This thesis describes the methodology of VC creation, analyses the formulation of a VC creation into an optimization problem, and develops solution heuristics. The concept of VC is supported by two other concepts introduced here namely anchor point (AP) and overload partition (OLP). The concept of AP is used to represent the demand distribution in a network that assists the problem formulation of the VC creation and SLA management. The concept of overload partition is used to handle the demand spikes in a VC.

In a PCU, the VC management is implemented in two phases: the first is an off-line phase of creating a VC that selects the appropriate resources and allocates them for the particular service; and the second phase employs an on-line scheduling heuristic to distribute the jobs/requests from the APs among the VC nodes to achieve load balancing. A detailed simulation study is conducted to analyze the performance of different VC configurations for different load conditions and scheduling schemes, and this performance is compared with a fully dynamic resource allocation scheme called Service Grid. The results verify the novelty of the VC concept.

Summary

One key concept that we should take from this paper is the way they decided how to allocate the resources. Here is a brief but excellent point to consider:

  • In a public computing utility (PCU), the virtual cluster (VC) management is implemented in two phases: the first is an off-line phase of creating a VC that selects the appropriate resources and allocates them for the particular service; and the second phase employs an on-line scheduling heuristic to distribute the jobs/requests from the anchor points (AP) among the VC nodes to achieve load balancing. A detailed simulation study is conducted to analyze the performance of different VC configurations for different load conditions and scheduling schemes, and this performance is compared with a fully dynamic resource allocation scheme called Service Grid. The results verify the novelty of the VC concept.

Key Concepts

The key features of the PCU Model are:

  • an ISP like service structure
  • proposing the resource profiling scheme for resource registration
  • addressing scalability by developing PCU structure made up of domains
  • incorporating peering technology for inter-domain information dissemination
  • SLA based service instantiation and monitoring

The key concepts of the VCs idea in this paper are:

  • it mathematically formulates the trade-off between achieving the best QoS and reducing system cost, making it well suited for commercial infrastructures
  • even though multiple services can occupy a single resource and the service–resource attachments can change with time, a virtualized static logical resource set exposed to the service origin (SO) hides this complexity
  • being a semi-dynamic scheme, a VC can reshape itself to match the varying demand pattern, while the static virtualization presented to the SO keeps service management simple
  • the optimization-based VC creation results in better resource utilization

The key concept of anchor points:

  • By providing a representation of demand distribution in a network, the concept of anchor points enables client-centric resource allocation for wide-area services.

The key attributes of overload partitions:

  • they are selected via an optimization process and are shared among multiple services
  • they provide a cost-effective, yet QoS-compliant, solution for handling demand spikes in the network
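The two-phase VC management described above can be sketched in a few lines. This is a hypothetical illustration, not the thesis's actual algorithm: `Node`, `create_vc` and `schedule` are invented names, the off-line phase is modelled as a greedy cost/capacity selection, and the on-line phase as a least-loaded heuristic.

```python
# Illustrative sketch of the two-phase virtual-cluster (VC) management.
# All names and heuristics here are assumptions, not taken from the thesis.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: int      # how many jobs this node can serve
    cost: float        # price of allocating this node to the VC
    load: int = 0      # jobs currently assigned (on-line phase)

def create_vc(candidates, demand):
    """Off-line phase: greedily pick the cheapest-per-unit nodes whose
    combined capacity covers the expected demand from the anchor points."""
    vc, covered = [], 0
    for node in sorted(candidates, key=lambda n: n.cost / n.capacity):
        if covered >= demand:
            break
        vc.append(node)
        covered += node.capacity
    return vc

def schedule(vc, job):
    """On-line phase: load balancing -- send the job to the least-loaded
    node that still has spare capacity."""
    free = [n for n in vc if n.load < n.capacity]
    if not free:
        return None  # would spill into an overload partition (OLP)
    target = min(free, key=lambda n: n.load / n.capacity)
    target.load += 1
    return target
```

The greedy cost/capacity ordering stands in for the thesis's optimization-based VC creation; the point is only to show the off-line/on-line split.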

Service Level Agreement in Cloud Computing

SLAs in Cloud Computing, a paper by Pankesh Patel, Ajith Ranabahu and Amit Sheth.

Abstract

Cloud computing that provides cheap and pay-as-you-go computing resources is rapidly gaining momentum as an alternative to traditional IT Infrastructure. As more and more consumers delegate their tasks to cloud providers, Service Level Agreements(SLA) between consumers and providers emerge as a key aspect. Due to the dynamic nature of the cloud, continuous monitoring on Quality of Service (QoS)attributes is necessary to enforce SLAs. Also numerous other factors such as trust (on the cloud provider) come into consideration, particularly for enterprise customers that may outsource its critical data. This complex nature of the cloud landscape warrants a sophisticated means of managing SLAs. This paper proposes a mechanism for managing SLAs in a cloud computing environment using the Web Service Level Agreement(WSLA) framework, developed for SLA monitoring and SLA enforcement in a Service Oriented Architecture (SOA). We use the third party support feature of WSLA to delegate monitoring and enforcement tasks to other entities in order to solve the trust issues. We also present a real world use case to validate our proposal.

Summary

Claimed by Scott

Service Level Agreements on IP Networks

By Dinesh C. Verma, IBM T. J. Watson Research Center. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1323286&tag=1

Abstract

This paper provides an overview of service-level agreements in IP networks. It looks at the typical components of a service-level agreement, and identifies three common approaches that are used to satisfy service level agreements in IP networks. The implications of using the approaches in the context of a network service provider, a hosting service provider, and an enterprise are examined. While most providers currently offer a static insurance approach towards supporting service level agreements, the schemes that can lead to more dynamic approaches are identified.

Summary

(HS) This paper starts off by describing the components of a service level agreement:

  • A description of the nature of the service to be provided
  • The expected performance level of the service, specifically its reliability and responsiveness
  • The time frame for response and problem resolution
  • The process for monitoring and reporting the service level
  • The consequences of the service provider not meeting its obligations
  • Escape clauses and constraints

They then give three examples of service level agreements on IP networks: 1) network connectivity services, 2) hosting services and 3) integrated services.

For each of the above three, they suggest availability, performance and reliability clauses. I think these three notions of availability, reliability and performance could be parameters that the scheme we are designing should have for each contract.

After this they discuss three different approaches to supporting SLAs: 1) the insurance approach, 2) the provisioning approach and 3) the adaptive approach.
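The SLA components listed above map naturally onto a record type, with the availability/reliability/performance notions as checkable parameters. This is a minimal sketch under assumed names (`ServiceLevelAgreement`, `compliant`); the field choices are illustrative, not from the paper.

```python
# Hypothetical record for the six SLA components described above,
# plus a compliance check over the three suggested QoS parameters.

from dataclasses import dataclass

@dataclass
class ServiceLevelAgreement:
    description: str           # 1) nature of the service provided
    availability: float        # 2) expected performance level:
    reliability: float         #    reliability and responsiveness
    response_time_ms: float
    resolution_hours: float    # 3) time frame for problem resolution
    monitoring: str            # 4) how the service level is reported
    penalty: str               # 5) consequences of non-compliance
    escape_clauses: list       # 6) escape clauses and constraints

def compliant(sla, measured):
    """Check measured values against the availability, reliability and
    performance (response time) targets in the SLA."""
    return (measured["availability"] >= sla.availability
            and measured["reliability"] >= sla.reliability
            and measured["response_time_ms"] <= sla.response_time_ms)
```

Under the insurance approach, a `False` result would trigger the penalty clause; under the adaptive approach, it would instead trigger renegotiation.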

Trustworthiness of New Contracts

Determining the Trustworthiness of New Electronic Contracts from Lecture Notes in Computer Science by Paul Groth, Simon Miles, Sanjay Modgil, Nir Oren, Michael Luck and Yolanda Gil, 2009.

Abstract

Expressing contractual agreements electronically potentially allows agents to automatically perform functions surrounding contract use: establishment, fulfilment, renegotiation etc. For such automation to be used for real business concerns, there needs to be a high level of trust in the agent-based system. While there has been much research on simulating trust between agents, there are areas where such trust is harder to establish. In particular, contract proposals may come from parties that an agent has had no prior interaction with and, in competitive business-to-business environments, little reputation information may be available. In human practice, trust in a proposed contract is determined in part from the content of the proposal itself, and the similarity of the content to that of prior contracts, executed to varying degrees of success. In this paper, we argue that such analysis is also appropriate in automated systems, and to provide it we need systems to record salient details of prior contract use and algorithms for assessing proposals on their content. We use provenance technology to provide the former and detail algorithms for measuring contract success and similarity for the latter, applying them to an aerospace case study.

Summary

Imagine this as a table:

  • Type
    • Whether this is an obligation or permission. A prohibition is modelled as an obligation not to do something, i.e. with a negative normative condition below
  • Target
    • The contract party obliged, prohibited or permitted by the clause.
  • Activating Condition
    • The circumstances under which the clause has force, parameterized by the variables specific to each instance.
  • Normative Condition
    • The circumstances under which the obligation is not being violated or the permission is being taken advantage of, parameterized by the variables specific to each instance. Therefore, for an obligation, the target must maintain the normative condition so as not to be in violation of the contract.
  • Expiration Condition
    • The circumstances under which the clause no longer has force, parameterized by the variables specific to each instance.

This paper gives a nice, straightforward definition of what a contract is, and supplies the above schema for one.
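The clause schema above translates directly into a small data structure, with the three conditions modelled as predicates over the contract's runtime state. This is a sketch under assumed names (`Clause`, `violated`), not the paper's implementation.

```python
# Minimal encoding of the clause schema above. The condition fields are
# predicates over a state dictionary; all names are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Clause:
    clause_type: str                     # "obligation" or "permission"
    target: str                          # the contract party bound by it
    activating: Callable[[dict], bool]   # when the clause has force
    normative: Callable[[dict], bool]    # condition the target must maintain
    expiration: Callable[[dict], bool]   # when the clause no longer has force

def violated(clause, state):
    """An obligation is violated when it is active, has not expired, and
    its normative condition does not hold. (A prohibition is an obligation
    with a negated normative condition, as the paper notes.)"""
    return (clause.clause_type == "obligation"
            and clause.activating(state)
            and not clause.expiration(state)
            and not clause.normative(state))
```

For example, "Company A must cache Company B's data for 30 days" becomes an obligation whose normative condition tests that the data is cached and whose expiration condition tests the day counter.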

Web Privacy with P3P

http://www.oreilly.de/catalog/webprivp3p/

This book talks about P3P and how companies and web developers can comply with it. Also check http://www.w3.org/P3P/

Summary

Hadi


Basically, users install this plugin/extension in their browsers. Websites publish their privacy policies using a standard XML template, while users fill out a preference form in the browser. When a user visits a website that provides such a policy, the extension reads it and informs the user of how compatible, and in what areas, that website's privacy policy is with the preferences the user has given. The language used to express the user's preferences is called APPEL (A P3P Preference Exchange Language). P3P was developed by the World Wide Web Consortium (W3C) and officially recommended on April 16, 2002. (from Wikipedia, http://en.wikipedia.org/wiki/P3p)

Hence this is not related to what we want to do, given that we have narrowed observability to observability in contracts only.

Gossiping in Distributed Systems

[1] Paper: Gossiping in Distributed Systems, Anne-Marie Kermarrec, Maarten van Steen, ACM SIGOPS Operating Systems Review, 2007

Abstract

Gossip-based algorithms were first introduced for reliably disseminating data in large-scale distributed systems. However, their simplicity, robustness, and flexibility make them attractive for more than just pure data dissemination alone. In particular, gossiping has been applied to data aggregation, overlay maintenance, and resource allocation. Gossiping applications more or less fit the same framework, with often subtle differences in algorithmic details determining divergent emergent behaviour. This divergence is often difficult to understand, as formal methods have yet to be developed that can capture the full design space of gossiping solutions. In this paper, we present a brief introduction to the field of gossiping in distributed systems, by providing a simple framework and using that framework to describe solutions for various application domains.

Summary

Hadi

This paper talks about different applications of gossiping and different approaches. Among the most important applications are data dissemination and monitoring services (such as failure detection). Nearly all gossip-style algorithms follow this framework:

a) Peer selection: the process of selecting a list of peers to send some data to, either uniformly at random or based on some ranking criterion (e.g. proximity, need, etc.).

b) Data exchanged: selecting the information to pass on to the selected peers.

c) Data processing: processing the data received.

"Each peer is equipped with a cache, consisting of references to other peers in the system." This cache could also store information about other peers. In our project, this could be the service every other peer provides and the reputation that the peers have.
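The three-step framework and the per-peer cache can be sketched as a simple push-style gossip round. This is a generic illustration of the pattern, not the paper's formalization; the class and method names are invented.

```python
# Sketch of the gossip framework above: (a) peer selection from a local
# cache, (b) data exchanged, (c) data processing. Names are illustrative.

import random

class Peer:
    def __init__(self, name):
        self.name = name
        self.cache = []        # references to other peers in the system
        self.known = set()     # data items this peer has seen

    def select_peers(self, fanout):
        """Step (a): pick peers uniformly at random from the cache."""
        return random.sample(self.cache, min(fanout, len(self.cache)))

    def gossip(self, fanout=2):
        """Step (b): push everything this peer knows to selected peers."""
        for other in self.select_peers(fanout):
            other.receive(self.known)

    def receive(self, items):
        """Step (c): merge the received data into local state."""
        self.known |= set(items)
```

For our project, `known` could hold the service each peer provides and its reputation, and `select_peers` could rank by proximity or cost instead of choosing uniformly.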

The paper is then divided into a few categories:

1) Dissemination

2) Peer Sampling

3) Topology Construction

4) Resource Management

In each case, they refine the framework mentioned above.

1) Data Dissemination: "Traditionally, gossip-based solutions have been used for data dissemination purposes. A standard approach toward dissemination is to simply let peers forward messages to each other [1]". The framework for this section is as follows:

Peer Selection: Each peer selects a list of peers to send information to.

Data Exchanged: Some message is selected and sent.

Data Processing: Receiving peer processes the data [1]

2) Peer Sampling:

Peer sampling assumes that the cost and latency of contacting each peer are the same. Realistically, however, this is not the case. In our system we need to take into account cost, number of paths, etc., since a peer's neighbours are not all located near it.

3) Topology Construction:

Here they mention that, for practical reasons, each node/peer maintains only a partial view of the entire system.

4) Resource Management

In this section, they mention the other use of gossiping: resource management and monitoring, such as failure detection. In this application, the messages exchanged carry status information, such as "Are you alive?" or "I am alive" messages, possibly in the form of heartbeats.
In resource management, gossiping can also be used for resource allocation. They give the example of "a gossip-based approach to estimate which slice of a collection a node belongs has been proposed in."

Increasing Observability

As we discussed on Thursday, the real question when looking at observability is whether an action can be viewed, and by whom. In the real world, you have a chance of being observed no matter what you do; the Internet, on the other hand, reduces this observability and instead offers a modicum of anonymity.

As the possibility of being observed increases, behavior adjusts to encourage the positive reputation of the actor or to conform with laws and regulations. This is the main benefit we wish to obtain by increasing the observability of digital actions. While omnipresent observation is possible on a computer network, in terms of observing contracts it might be more efficient to impose the possibility of being observed.


A Possible System for Increasing Observability of Contracts and Actions?

In class on Thursday, Scott brought up the idea of tracking a contract by making a minimal set of details available to all (i.e., everyone knows the parties involved in the contract, and whether the contract was fulfilled). Taking this a little further, our group considered the existence of an anonymous, distributed quorum of observers.

This quorum would, upon the creation of a contract, be given a summary of the contract (for example, Company A has agreed to cache data for Company B on a given day, while Company B will reciprocate the following day). Over the term of the contract, the individual systems in the quorum would test the contract to see if the terms had been met. At the end of the contract period, the systems would provide a "vote" declaring whether they witnessed the contract being fulfilled.
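The quorum's end-of-term decision amounts to collecting votes and taking a majority. A minimal sketch, assuming a simple majority rule and a reporting threshold (both our own design choices, not something settled in class):

```python
# Hypothetical vote tally for the observer quorum described above.
# The majority rule and quorum threshold are illustrative assumptions.

from collections import Counter

def tally(votes, quorum_size):
    """Decide a contract's outcome from observer votes.

    votes: mapping of observer id -> True (fulfilled) / False (violated).
    Returns "fulfilled", "violated", or "no quorum" when fewer than half
    of the quorum's observers reported.
    """
    if len(votes) * 2 < quorum_size:
        return "no quorum"
    counts = Counter(votes.values())
    return "fulfilled" if counts[True] > counts[False] else "violated"
```

Because the observers are anonymous and distributed, no single party controls the outcome; a dishonest contract party would need to corrupt a majority of observers to change the verdict.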

This system could also be extended to monitor general actions. Consider again this set of observers; now, however, they connect at random to various websites and take a snapshot of all connections to each. At any given time, no other user knows which systems the observers are monitoring. In other words, the observers are analogous to police patrols, albeit with no set patrol route.