<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tkomal</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Tkomal"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Tkomal"/>
	<updated>2026-04-22T11:05:23Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-O%26C&amp;diff=9012</id>
		<title>Category:2011-O&amp;C</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-O%26C&amp;diff=9012"/>
		<updated>2011-03-31T15:17:26Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Please note that the majority of our efforts are contained on the &amp;quot;Discussion&amp;quot; page.&lt;br /&gt;
&lt;br /&gt;
==Changes to be viewed (delete once acknowledged by group)==&lt;br /&gt;
(TK) - I have provided a summary of key concepts that will help with the idea of resource allocation across the network under the summary for &#039;&#039;&#039;Heuristics for Enforcing Service Level Agreements in a Public Computing Utility&#039;&#039;&#039; &#039;&#039;(&amp;lt;--Scott can you link this directly to the summary...I have no idea how to)&#039;&#039; We can go more in depth with the concepts that catch your eye. The paper is beautifully written and easy to understand.&lt;br /&gt;
&lt;br /&gt;
==Problem Outline==&lt;br /&gt;
&lt;br /&gt;
* How do we define &#039;public&#039; action? How do we monitor &#039;public&#039; action without monitoring every action?&lt;br /&gt;
* How can you make sure your agent is acting according to your instructions?&lt;br /&gt;
* How can we ensure that information we receive through a third-party is legitimate?&lt;br /&gt;
* How do I observe the acts of other agents, particularly public acts?&lt;br /&gt;
* What &#039;&#039;&#039;CAN&#039;&#039;&#039; be observed?&lt;br /&gt;
* How can contracts be made between computers/agents?&lt;br /&gt;
* How can we ensure that contracts are being upheld?&lt;br /&gt;
* What side effects does observation have? For example, if everyone can see who buys something online, would that encourage or discourage use of such a website?&lt;br /&gt;
&lt;br /&gt;
==Report Outline==&lt;br /&gt;
&lt;br /&gt;
*Abstract&lt;br /&gt;
*Introduction&lt;br /&gt;
**Observability on a Network&lt;br /&gt;
*Automatic Contracts (System-to-System)&lt;br /&gt;
**What Can be Contracted?&lt;br /&gt;
**Determining When to Initiate a Contract&lt;br /&gt;
**States of a Contract&lt;br /&gt;
*Quantifiable Uniform Observation and Reporting of Unmanned Mediation (QUORUM)&lt;br /&gt;
**System Overview&lt;br /&gt;
***Roles in the QUORUM&lt;br /&gt;
***Gossip and Reputation&lt;br /&gt;
****QUORUM Cliques&lt;br /&gt;
***Validating a Contract, or How I Learned to Stop Worrying and Love the QUORUM	&lt;br /&gt;
****Private Contracts&lt;br /&gt;
*Alternatives/Other Approaches to QUORUM&lt;br /&gt;
*The Future of QUORUM&lt;br /&gt;
*Conclusion&lt;br /&gt;
&lt;br /&gt;
==Focus==&lt;br /&gt;
&lt;br /&gt;
As we&#039;ve discussed these topics, we&#039;ve decided that the focus of our report will be on &#039;&#039;&#039;Contracts&#039;&#039;&#039; and the observation of their fulfillment. We are also working under the assumption that participants are uniquely and universally identifiable.&lt;br /&gt;
&lt;br /&gt;
==Members==&lt;br /&gt;
* Seyyed Hadi Sajjadpour&lt;br /&gt;
* Tarjit Komal&lt;br /&gt;
* Scott Lyons&lt;br /&gt;
* Andrew Luczak&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category_talk:2011-O%26C&amp;diff=9011</id>
		<title>Category talk:2011-O&amp;C</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category_talk:2011-O%26C&amp;diff=9011"/>
		<updated>2011-03-31T15:10:11Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: /* Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Papers==&lt;br /&gt;
&lt;br /&gt;
===Observability===&lt;br /&gt;
&lt;br /&gt;
* How do we define &#039;public&#039; action? How do we monitor &#039;public&#039; action without monitoring every action?&lt;br /&gt;
* How can you make sure your agent is acting according to your instructions?&lt;br /&gt;
* How can we ensure that information we receive through a third-party is legitimate?&lt;br /&gt;
* What &#039;&#039;&#039;CAN&#039;&#039;&#039; be observed?&lt;br /&gt;
&lt;br /&gt;
==== Contract Monitoring ====&lt;br /&gt;
&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-642-03668-2_29 Contract Monitoring in Agent-Based Systems: Case Study] from Lecture Notes in Computer Science by Jiří Hodík, Jiří Vokřínek and Michal Jakob, 2009&lt;br /&gt;
&lt;br /&gt;
===== Abstract =====&lt;br /&gt;
&lt;br /&gt;
Monitoring of fulfilment of obligations defined by electronic contracts in distributed domains is presented in this paper. A two-level model of contract-based systems and the types of observations needed for contract monitoring are introduced. The observations (inter-agent communication and agents’ actions) are collected and processed by the contract observation and analysis pipeline. The presented approach has been utilized in a multi-agent system for electronic contracting in a modular certification testing domain.&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
Andrew&lt;br /&gt;
&lt;br /&gt;
==== Monitoring Service Contracts ====&lt;br /&gt;
&lt;br /&gt;
[http://dx.doi.org/10.1007/3-540-45705-4_15 An Agent-Based Framework for Monitoring Service Contracts] from Lecture Notes in Computer Science by Helmut Kneer, Henrik Stormer, Harald Häuschen and Burkhard Stiller, 2002&lt;br /&gt;
&lt;br /&gt;
===== Abstract =====&lt;br /&gt;
&lt;br /&gt;
Within the past few years, the variety of real-time multimedia streaming services on the Internet has grown steadily. Performance of streaming services is very sensitive to traffic congestion and results very often in poor service quality on today’s best effort Internet. Reasons include the lack of any traffic prioritization mechanisms on the network level and its dependence on the cooperation of several Internet Service Providers and their reliable transmission of data packets. Therefore, service differentiation and its reliable delivery must be enforced on a business level through the introduction of service contracts between service providers and their customers. However, compliance with such service contracts is the crucial point that decides about successful improvement of the service delivery process. For that reason, an agent-based monitoring framework has been developed and introduced enabling the use of mobile agents to monitor compliance with contractual agreements between service providers and service customers. This framework describes the setup and the functionality of different kinds of mobile agents that allow monitoring of service contracts across domains of multiple service providers.&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
Andrew&lt;br /&gt;
&lt;br /&gt;
===Contracts===&lt;br /&gt;
&lt;br /&gt;
* What can or can&#039;t be contracted?&lt;br /&gt;
* How can you quantify abstract resources?&lt;br /&gt;
* How can two or more parties agree with a minimum of intervention?&lt;br /&gt;
&lt;br /&gt;
Some contracts take the form of Service Level Agreements (SLAs), and there have been efforts to automate this process:&lt;br /&gt;
&lt;br /&gt;
==== AURIC ====&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-540-75694-1_21 AURIC: A Scalable and Highly Reusable SLA Compliance Auditing Framework] from Lecture Notes in Computer Science, by Hasan and Burkhard Stiller, 2007.&lt;br /&gt;
&lt;br /&gt;
===== Abstract =====&lt;br /&gt;
Service Level Agreements (SLA) are needed to allow business interactions to rely on Internet services. Service Level Objectives (SLO) specify the committed performance level of a service. Thus, SLA compliance auditing aims at verifying these commitments. Since SLOs for various application services and end-to-end performance definitions vary largely, automated auditing of SLA compliances poses the challenge to an auditing framework. Moreover, end-to-end performance data are potentially large for a provider with many customers. Therefore, this paper presents a scalable and highly reusable auditing framework and a prototype, termed AURIC (Auditing Framework for Internet Services), whose components can be distributed across different domains.&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
TJ&lt;br /&gt;
&lt;br /&gt;
==== Bandwidth ====&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-540-30189-9_19 SLA-Driven Flexible Bandwidth Reservation Negotiation Schemes for QoS Aware IP Networks] from Lecture Notes in Computer Science by Gerard Parr and Alan Marshall, 2004.&lt;br /&gt;
&lt;br /&gt;
===== Abstract =====&lt;br /&gt;
We present a generic Service Level Agreement (SLA)-driven service provisioning architecture, which enables dynamic and flexible bandwidth reservation schemes on a per-user or a per-application basis. Various session level SLA negotiation schemes involving bandwidth allocation, service start time and service duration parameters are introduced and analysed. The results show that these negotiation schemes can be utilised for the benefits of both end user and network provider, such as getting the highest individual SLA optimisation in terms of Quality of Service (QoS) and price. A prototype based on an industrial agent platform has also been built to demonstrate the negotiation scenario and this is presented and discussed.&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
Claimed by Scott&lt;br /&gt;
&lt;br /&gt;
==== Dynamic Adaptation ====&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-540-89652-4_28 Context-Driven Autonomic Adaptation of SLA] from Lecture notes in Computer Science, by authors Caroline Herssens, Stéphane Faulkner and Ivan Jureta, 2008.&lt;br /&gt;
&lt;br /&gt;
===== Abstract =====&lt;br /&gt;
Service Level Agreements (SLAs) are used in Service-Oriented Computing to define the obligations of the parties involved in a transaction. SLAs define the service users’ Quality of Service (QoS) requirements that the service provider should satisfy. Requirements defined once may not be satisfiable when the context of the web services changes (e.g., when requirements or resource availability changes). Changes in the context can make SLAs obsolete, making SLA revision necessary. We propose a method to autonomously monitor the services’ context, and adapt SLAs to avoid obsolescence thereof.&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
TJ&lt;br /&gt;
&lt;br /&gt;
==== Heuristics for Enforcing Service Level Agreements ====&lt;br /&gt;
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.127.8674&amp;amp;rep=rep1&amp;amp;type=pdf Heuristics for Enforcing Service Level Agreements in a Public Computing Utility] A master&#039;s thesis by Balasubramaneyam Maniymaran.&lt;br /&gt;
&lt;br /&gt;
===== Abstract =====&lt;br /&gt;
With the increasing popularity of consumer and research oriented wide-area applications, there arises a need for a robust and efficient wide-area resource management system. Even though there exists number of systems for wide area resource management, they fail to couple the QoS management with cost management, which is the key issue in pushing such a system to be commercially successful. Further, the lack of IT skills within the companies arouses the need of decoupling service management from the underlying complex wide-area resource management. A public computing utility (PCU) addresses both these issues, and, in addition, it creates a market place for the selling idling computing resources.&lt;br /&gt;
&lt;br /&gt;
This work proposes a PCU model addressing the above mentioned issues and develops heuristics to enforce QoS in that model. A new concept called virtual clusters (VCs) is introduced as semi-dynamic, service specific resource partitions of a PCU, optimizing cost, QoS, and resource utilization. This thesis describes the methodology of VC creation, analyses the formulation of a VC creation into an optimization problem, and develops solution heuristics. The concept of VC is supported by two other concepts introduced here namely anchor point (AP) and overload partition (OLP). The concept of AP is used to represent the demand distribution in a network that assists the problem formulation of the VC creation and SLA management. The concept of overload partition is used to handle the demand spikes in a VC.&lt;br /&gt;
&lt;br /&gt;
In a PCU, the VC management is implemented in two phases: the first is an off-line phase of creating a VC that selects the appropriate resources and allocates them for the particular service; and the second phase employs on-line scheduling heuristic to distribute the jobs/requests from the APs among the VC nodes to achieve load balancing. A detailed simulation study is conducted to analyze the performance of different VC configurations for different load conditions and scheduling schemes and this performance is compared with a fully dynamic resource allocation scheme called Service Grid. The results verify the novelty of the VC concept.&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
One key concept that we should take from this paper is the way they decided how to allocate the resources. Here is a brief but excellent point to consider:&lt;br /&gt;
*In a public computing utility (PCU), the virtual cluster (VC) management is implemented in two phases: the first is an off-line phase of creating a VC that selects the appropriate resources and allocates them for the particular service; and the second phase employs on-line scheduling heuristic to distribute the jobs/requests from the anchor points (AP) among the VC nodes to achieve load balancing. A detailed simulation study is conducted to analyze the performance of different VC configurations for different load conditions and scheduling schemes and this performance is compared with a fully dynamic resource allocation scheme called Service Grid. The results verify the novelty of the VC concept.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;KEY CONCEPTS&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The key features of the PCU Model are:&#039;&#039;&lt;br /&gt;
*an ISP like service structure&lt;br /&gt;
*proposing the resource profiling scheme for resource registration&lt;br /&gt;
*addressing scalability by developing PCU structure made up of domains&lt;br /&gt;
*incorporating peering technology for inter-domain information dissemination &lt;br /&gt;
*SLA based service instantiation and monitoring&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The key concepts of the VCs idea in this paper are:&#039;&#039;&lt;br /&gt;
*it mathematically formulates the trade-off between achieving the best QoS and reducing the system cost, making it well suited for commercial infrastructures&lt;br /&gt;
*even though multiple services can occupy a single resource and the service–resource attachments can change with time, a virtualized static logical resource set exposed to the service origin (SO) hides the complexity&lt;br /&gt;
*being a semi-dynamic scheme, a VC can reshape itself to match the varying demand pattern, while the static virtualization presented to the SO simplifies service management&lt;br /&gt;
*the optimization-based VC creation results in better resource utilization&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The key concept of anchor points:&#039;&#039;&lt;br /&gt;
*By providing a representation of demand distribution in a network, the concept of anchor point enables a client-centric resource allocation for wide-area services.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;The key attributes of Overload Partitions:&#039;&#039;&lt;br /&gt;
*they are selected via an optimization process and are shared among multiple services&lt;br /&gt;
*they provide a cost-effective, but still QoS-compliant, solution for handling demand spikes in the network&lt;br /&gt;
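The two-phase VC management summarized above lends itself to a short sketch. This is a hypothetical simplification: the greedy off-line selection and least-loaded on-line dispatch below stand in for the thesis&#039;s actual optimization heuristics, and all names are ours.&lt;br /&gt;

```python
# Hypothetical two-phase VC management sketch (names and heuristics are ours,
# not the thesis's actual optimization formulation).

def create_vc(resources, demand):
    """Off-line phase: greedily pick the cheapest resources until their
    combined capacity covers the anticipated demand for the service."""
    vc, covered = [], 0
    for cap, cost in sorted(resources, key=lambda r: r[1]):
        if covered >= demand:
            break
        vc.append((cap, cost))
        covered += cap
    return vc

def schedule(jobs, vc):
    """On-line phase: dispatch each job/request to the currently
    least-loaded VC node, approximating load balancing."""
    load = [0] * len(vc)
    for job in jobs:
        i = load.index(min(load))
        load[i] += job
    return load

# (capacity, cost) pairs; a demand of 25 picks the two cheapest resources.
vc = create_vc([(10, 5), (20, 3), (15, 4)], demand=25)
print(schedule([4, 6, 2, 8], vc))  # [14, 6]
```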
&lt;br /&gt;
==== Service Level Agreement in Cloud Computing ====&lt;br /&gt;
[http://knoesis.wright.edu/library/download/OOPSLA_cloud_wsla_v3.pdf SLAs in Cloud Computing] A paper by Pankesh Patel, Ajith Ranabahu, and Amit Sheth.&lt;br /&gt;
&lt;br /&gt;
===== Abstract =====&lt;br /&gt;
Cloud computing that provides cheap and pay-as-you-go computing resources is rapidly gaining momentum as an alternative to traditional IT Infrastructure. As more and more consumers delegate their tasks to cloud providers, Service Level Agreements (SLA) between consumers and providers emerge as a key aspect. Due to the dynamic nature of the cloud, continuous monitoring on Quality of Service (QoS) attributes is necessary to enforce SLAs. Also numerous other factors such as trust (on the cloud provider) come into consideration, particularly for enterprise customers that may outsource its critical data. This complex nature of the cloud landscape warrants a sophisticated means of managing SLAs. This paper proposes a mechanism for managing SLAs in a cloud computing environment using the Web Service Level Agreement (WSLA) framework, developed for SLA monitoring and SLA enforcement in a Service Oriented Architecture (SOA). We use the third party support feature of WSLA to delegate monitoring and enforcement tasks to other entities in order to solve the trust issues. We also present a real world use case to validate our proposal.&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
Claimed by Scott&lt;br /&gt;
&lt;br /&gt;
==== Service Level Agreements on IP Networks ==== &lt;br /&gt;
&lt;br /&gt;
By Dinesh C. Verma, IBM T. J. Watson Research Center&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1323286&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
===== Abstract =====&lt;br /&gt;
This paper provides an overview of service-level agreements in IP networks. It looks at the typical components of a service-level agreement, and identifies three common approaches that are used to satisfy service level agreements in IP networks. The implications of using the approaches in the context of a network service provider, a hosting service provider, and an enterprise are examined. While most providers currently offer a static insurance approach towards supporting service level agreements, the schemes that can lead to more dynamic approaches are identified.&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
&lt;br /&gt;
(HS) This paper starts off by talking about different components of a service level agreement. These components include:&lt;br /&gt;
1) A description of the nature of service to be provided&lt;br /&gt;
2) The expected performance level of the service, specifically its reliability and responsiveness&lt;br /&gt;
3) The time-frame for response and problem resolution&lt;br /&gt;
4) The process for monitoring and reporting the service level&lt;br /&gt;
5) The consequences for the service provider not meeting its obligations&lt;br /&gt;
6) Escape clauses and constraints.&lt;br /&gt;
&lt;br /&gt;
Then they give three examples of service level agreements on IP networks:&lt;br /&gt;
1) Network Connectivity Services&lt;br /&gt;
2) Hosting Services&lt;br /&gt;
3) Integrated services&lt;br /&gt;
&lt;br /&gt;
And for each of the above three, they suggest some availability, performance, and reliability clauses. I think the three notions of &#039;availability, reliability, and performance&#039; could be parameters that the scheme we are designing should include for each contract.&lt;br /&gt;
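As a hypothetical sketch, those three notions could be captured as per-contract parameters; the SLAClause record and the meets check below are our own illustrative names, not taken from the paper.&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass
class SLAClause:
    """Hypothetical per-contract service-level parameters."""
    availability: float    # fraction of time the service must be reachable
    reliability: float     # fraction of requests that must succeed
    performance_ms: float  # target response time in milliseconds

def meets(measured, agreed):
    """Check a measured service level against the contracted clause."""
    return (measured.availability >= agreed.availability
            and measured.reliability >= agreed.reliability
            and agreed.performance_ms >= measured.performance_ms)

agreed = SLAClause(availability=0.999, reliability=0.99, performance_ms=200)
measured = SLAClause(availability=0.9995, reliability=0.995, performance_ms=150)
print(meets(measured, agreed))  # True
```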
&lt;br /&gt;
After this, they discuss three different approaches to supporting SLAs:&lt;br /&gt;
1) Insurance Approach&lt;br /&gt;
2) Provisioning Approach&lt;br /&gt;
3) Adaptive Approach&lt;br /&gt;
&lt;br /&gt;
==== Trustworthiness of New Contracts ====&lt;br /&gt;
&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-642-10203-5_12 Determining the Trustworthiness of New Electronic Contracts] from Lecture Notes in Computer Science by Paul Groth, Simon Miles, Sanjay Modgil, Nir Oren, Michael Luck and Yolanda Gil, 2009.&lt;br /&gt;
&lt;br /&gt;
===== Abstract =====&lt;br /&gt;
&lt;br /&gt;
Expressing contractual agreements electronically potentially allows agents to automatically perform functions surrounding contract use: establishment, fulfilment, renegotiation etc. For such automation to be used for real business concerns, there needs to be a high level of trust in the agent-based system. While there has been much research on simulating trust between agents, there are areas where such trust is harder to establish. In particular, contract proposals may come from parties that an agent has had no prior interaction with and, in competitive business-to-business environments, little reputation information may be available. In human practice, trust in a proposed contract is determined in part from the content of the proposal itself, and the similarity of the content to that of prior contracts, executed to varying degrees of success. In this paper, we argue that such analysis is also appropriate in automated systems, and to provide it we need systems to record salient details of prior contract use and algorithms for assessing proposals on their content. We use provenance technology to provide the former and detail algorithms for measuring contract success and similarity for the latter, applying them to an aerospace case study.&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
Andrew&lt;br /&gt;
&lt;br /&gt;
==== Web Privacy with P3P ==== &lt;br /&gt;
&lt;br /&gt;
http://www.oreilly.de/catalog/webprivp3p/&lt;br /&gt;
&lt;br /&gt;
This book talks about P3P and how companies and web developers can comply with P3P.&lt;br /&gt;
See also http://www.w3.org/P3P/&lt;br /&gt;
&lt;br /&gt;
===== Summary =====&lt;br /&gt;
Hadi&lt;br /&gt;
&lt;br /&gt;
==Increasing Observability==&lt;br /&gt;
&lt;br /&gt;
Like we discussed on Thursday, the real question when looking at observability is whether an action can be viewed, and who can view it. In the real world, you have a chance of being observed no matter what you do; the Internet, on the other hand, reduces this observability and instead offers a modicum of anonymity.&lt;br /&gt;
&lt;br /&gt;
As the possibility of being observed increases, behavior adjusts to encourage the positive reputation of the actor or to conform with laws and regulations. This is the main benefit we wish to obtain by increasing the observability of digital actions. While omnipresent observation is possible on a computer network, in terms of observing contracts it might be more efficient to impose the possibility of being observed.&lt;br /&gt;
&lt;br /&gt;
===A Possible System for Increasing Observability of Contracts and Actions?===&lt;br /&gt;
&lt;br /&gt;
In class on Thursday, Scott brought up the idea of tracking a contract by making a minimal set of details available to all (i.e., everyone knows the parties involved in the contract, and whether the contract was fulfilled). Taking this a little further, our group considered the existence of an anonymous, distributed quorum of observers. &lt;br /&gt;
&lt;br /&gt;
This quorum would, upon the creation of a contract, be given a summary of the contract (for example, Company A has agreed to cache data for Company B on a given day, while Company B will reciprocate the following day). Over the term of the contract, the individual systems in the quorum would test the contract to see if the terms had been met. At the end of the contract period, the systems would provide a &amp;quot;vote&amp;quot; declaring whether they witnessed the contract being fulfilled.&lt;br /&gt;
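That voting step might be sketched as follows; the Observation record and the simple-majority threshold are illustrative assumptions on our part, not part of any agreed design.&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass
class Observation:
    observer_id: str  # anonymous member of the quorum
    fulfilled: bool   # did this observer witness the terms being met?

def contract_fulfilled(observations, threshold=0.5):
    """Declare the contract fulfilled when more than `threshold` of the
    quorum's votes report that its terms were met."""
    if not observations:
        raise ValueError("no observations collected")
    yes = sum(1 for o in observations if o.fulfilled)
    return yes / len(observations) > threshold

# e.g. 4 of 5 observers saw Company A cache Company B's data as agreed.
votes = [Observation("obs%d" % i, i != 0) for i in range(5)]
print(contract_fulfilled(votes))  # True
```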
&lt;br /&gt;
This system could also be extended to monitor general actions. Consider again this set of observers; now, however, they connect at random to various websites and take a snapshot of all connections to each. At any given time, no other user knows which system the observers will be monitoring. In other words, the observers are analogous to police patrols, albeit with no set patrol route.&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Contracts&amp;diff=8668</id>
		<title>Category:2011-Contracts</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Contracts&amp;diff=8668"/>
		<updated>2011-03-17T17:52:38Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the category page for things regarding contracts&lt;br /&gt;
&lt;br /&gt;
* (TK) When I think of contracts, I think of:&lt;br /&gt;
** (TK) who is responsible for the terms and agreements&lt;br /&gt;
** (TK) who is made aware of the agreement? Is it broadcast to everyone or only a select few people?&lt;br /&gt;
** (TK) who/what is the governing body?&lt;br /&gt;
*** (SL) Does there even need to be a governing body?&lt;br /&gt;
*** (TK) Essentially, who determines whether a contract is fulfilled or not? Should there not be some sort of tracking system that determines whether the requirements of the contract are completed, so that once they are, the contract is &amp;quot;closed&amp;quot;?&lt;br /&gt;
*** (SL) Might have to defer to Justice here...&lt;br /&gt;
** (TK) the first step should be to find a way to verify the different parties involved in the contract.&lt;br /&gt;
* (HS) What mechanism can be provided to enforce contracts?&lt;br /&gt;
* (SL) Should there be more of a feedback system than COMPLETE/INCOMPLETE?&lt;br /&gt;
* (SL) Does every contract have to be about the exchange of quantifiable &amp;quot;goods&amp;quot;?&lt;br /&gt;
* (TK) Computers are &amp;quot;citizens&amp;quot; of the &amp;quot;virtual&amp;quot; world. As such, contracts are from computer to computer, not person to person. Think of them as purely resource-sharing contracts which are made automatically by computers based on their &amp;quot;individual&amp;quot; needs.&lt;br /&gt;
&lt;br /&gt;
[[Category:2011-O&amp;amp;C]]&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Contracts&amp;diff=8647</id>
		<title>Category:2011-Contracts</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Contracts&amp;diff=8647"/>
		<updated>2011-03-17T17:23:59Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the category page for things regarding contracts&lt;br /&gt;
&lt;br /&gt;
* (TK) When I think of contracts, I think of:&lt;br /&gt;
** (TK) who is responsible for the terms and agreements&lt;br /&gt;
** (TK) who is made aware of the agreement? Is it broadcast to everyone or only a select few people?&lt;br /&gt;
** (TK) who/what is the governing body?&lt;br /&gt;
*** (SL) Does there even need to be a governing body?&lt;br /&gt;
*** (TK) Essentially who determines whether a contract is fulfilled or not? Should there not be some sort of tracking system to determine whether the requirements of the contracts are completed and once they are the contract is &amp;quot;closed&amp;quot;...&lt;br /&gt;
** (TK) the first step should be to find a way to verify the different parties involved in the contract.&lt;br /&gt;
* (HS) What mechanism can be provided to enforce contracts?&lt;br /&gt;
* (SL) Should there be more of a feedback system than COMPLETE/INCOMPLETE?&lt;br /&gt;
* (SL) Does every contract have to be about the exchange of quantifiable &amp;quot;goods&amp;quot;?&lt;br /&gt;
&lt;br /&gt;
[[Category:2011-O&amp;amp;C]]&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Observability&amp;diff=8629</id>
		<title>Category:2011-Observability</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Observability&amp;diff=8629"/>
		<updated>2011-03-17T15:31:29Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* (TK) How can we ensure that the information we provide (ourselves or through our computers) to the &amp;quot;network&amp;quot; or the public is not going to be maliciously used?&lt;br /&gt;
** (TK) One point that was discussed in class was the idea of a digital fingerprint. Is this really feasible? and how would it work?&lt;br /&gt;
&lt;br /&gt;
[[Category:2011-O&amp;amp;C]]&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Contracts&amp;diff=8628</id>
		<title>Category:2011-Contracts</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Contracts&amp;diff=8628"/>
		<updated>2011-03-17T15:28:41Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the category page for things regarding contracts&lt;br /&gt;
&lt;br /&gt;
* (TK) When I think of contracts, I think of:&lt;br /&gt;
** (TK) who is responsible for the terms and agreements&lt;br /&gt;
** (TK) who is made aware of the agreement? Is it broadcast to everyone or only a select few people?&lt;br /&gt;
** (TK) who/what is the governing body?&lt;br /&gt;
** (TK) the first step should be to find a way to verify the different parties involved in the contract.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:2011-O&amp;amp;C]]&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Contracts&amp;diff=8627</id>
		<title>Category:2011-Contracts</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Category:2011-Contracts&amp;diff=8627"/>
		<updated>2011-03-17T15:24:43Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the category page for things regarding contracts&lt;br /&gt;
&lt;br /&gt;
 - (TK) When I think of contracts, I think of:&lt;br /&gt;
	** (TK) who is responsible for the terms and agreements&lt;br /&gt;
	** (TK) who is made aware of the agreement? Is it broadcast to everyone or only a select few people?&lt;br /&gt;
	** (TK) who/what is the governing body?&lt;br /&gt;
	** (TK) the first step should be to find a way to verify the different parties involved in the contract.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:2011-O&amp;amp;C]]&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Observability_%26_Contracts&amp;diff=8242</id>
		<title>Talk:DistOS-2011W Observability &amp; Contracts</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Observability_%26_Contracts&amp;diff=8242"/>
		<updated>2011-03-08T17:35:55Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Observability==&lt;br /&gt;
&lt;br /&gt;
* How do we define &#039;public&#039; action? How do we monitor &#039;public&#039; action without monitoring every action?&lt;br /&gt;
* How can you make sure your agent is acting according to your instructions?&lt;br /&gt;
* How can we ensure that information we receive through a third-party is legitimate?&lt;br /&gt;
&lt;br /&gt;
==Contracts==&lt;br /&gt;
&lt;br /&gt;
* What can or can&#039;t be contracted?&lt;br /&gt;
* How can you quantify abstract resources?&lt;br /&gt;
* How can two or more parties agree with a minimum of intervention?&lt;br /&gt;
&lt;br /&gt;
Some contracts take the form of Service Level Agreements, and efforts have been made to automate this process:&lt;br /&gt;
&lt;br /&gt;
== AURIC ==&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-540-75694-1_21 AURIC: A Scalable and Highly Reusable SLA Compliance Auditing Framework] from Lecture Notes in Computer Science, by Hasan and Burkhard Stiller, 2007.&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Service Level Agreements (SLA) are needed to allow business interactions to rely on Internet services. Service Level Objectives (SLO) specify the committed performance level of a service. Thus, SLA compliance auditing aims at verifying these commitments. Since SLOs for various application services and end-to-end performance definitions vary widely, automated auditing of SLA compliance poses a challenge to an auditing framework. Moreover, end-to-end performance data are potentially large for a provider with many customers. Therefore, this paper presents a scalable and highly reusable auditing framework and a prototype, termed AURIC (Auditing Framework for Internet Services), whose components can be distributed across different domains.&lt;br /&gt;
&lt;br /&gt;
== Bandwidth ==&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-540-30189-9_19 SLA-Driven Flexible Bandwidth Reservation Negotiation Schemes for QoS Aware IP Networks] from Lecture Notes in Computer Science by Gerard Parr and Alan Marshall, 2004.&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
We present a generic Service Level Agreement (SLA)-driven service provisioning architecture, which enables dynamic and flexible bandwidth reservation schemes on a per-user or a per-application basis. Various session-level SLA negotiation schemes involving bandwidth allocation, service start time and service duration parameters are introduced and analysed. The results show that these negotiation schemes can be utilised for the benefit of both the end user and the network provider, such as getting the highest individual SLA optimisation in terms of Quality of Service (QoS) and price. A prototype based on an industrial agent platform has also been built to demonstrate the negotiation scenario, and this is presented and discussed.&lt;br /&gt;
&lt;br /&gt;
== Dynamic Adaptation ==&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-540-89652-4_28 Context-Driven Autonomic Adaptation of SLA] from Lecture Notes in Computer Science, by Caroline Herssens, Stéphane Faulkner and Ivan Jureta, 2008.&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Service Level Agreements (SLAs) are used in Service-Oriented Computing to define the obligations of the parties involved in a transaction. SLAs define the service users’ Quality of Service (QoS) requirements that the service provider should satisfy. Requirements defined once may not be satisfiable when the context of the web services changes (e.g., when requirements or resource availability changes). Changes in the context can make SLAs obsolete, making SLA revision necessary. We propose a method to autonomously monitor the services’ context, and adapt SLAs to avoid obsolescence thereof.&lt;br /&gt;
&lt;br /&gt;
== Heuristics for Enforcing Service Level Agreements ==&lt;br /&gt;
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.127.8674&amp;amp;rep=rep1&amp;amp;type=pdf Heuristics for Enforcing Service Level Agreements in a Public Computing Utility] A master&#039;s thesis by Balasubramaneyam Maniymaran.&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
With the increasing popularity of consumer- and research-oriented wide-area applications, there arises a need for a robust and efficient wide-area resource management system. Even though a number of systems exist for wide-area resource management, they fail to couple QoS management with cost management, which is the key issue in making such a system commercially successful. Further, the lack of IT skills within companies creates a need to decouple service management from the underlying complex wide-area resource management. A public computing utility (PCU) addresses both of these issues and, in addition, creates a marketplace for selling idle computing resources.&lt;br /&gt;
&lt;br /&gt;
This work proposes a PCU model addressing the above-mentioned issues and develops heuristics to enforce QoS in that model. A new concept called virtual clusters (VCs) is introduced as semi-dynamic, service-specific resource partitions of a PCU, optimizing cost, QoS, and resource utilization. This thesis describes the methodology of VC creation, analyses the formulation of VC creation as an optimization problem, and develops solution heuristics. The concept of VC is supported by two other concepts introduced here, namely anchor point (AP) and overload partition (OLP). The concept of AP is used to represent the demand distribution in a network, which assists the problem formulation of VC creation and SLA management. The concept of overload partition is used to handle demand spikes in a VC.&lt;br /&gt;
&lt;br /&gt;
In a PCU, VC management is implemented in two phases: the first is an off-line phase of creating a VC that selects the appropriate resources and allocates them for the particular service; the second phase employs an on-line scheduling heuristic to distribute the jobs/requests from the APs among the VC nodes to achieve load balancing. A detailed simulation study is conducted to analyze the performance of different VC configurations for different load conditions and scheduling schemes, and this performance is compared with a fully dynamic resource allocation scheme called Service Grid. The results verify the novelty of the VC concept.&lt;br /&gt;
&lt;br /&gt;
== Service Level Agreement in Cloud Computing ==&lt;br /&gt;
[http://knoesis.wright.edu/library/download/OOPSLA_cloud_wsla_v3.pdf SLAs in Cloud Computing] A paper written by Pankesh Patel, Ajith Ranabahu, Amit Sheth.&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Cloud computing, which provides cheap and pay-as-you-go computing resources, is rapidly gaining momentum as an alternative to traditional IT infrastructure. As more and more consumers delegate their tasks to cloud providers, Service Level Agreements (SLA) between consumers and providers emerge as a key aspect. Due to the dynamic nature of the cloud, continuous monitoring of Quality of Service (QoS) attributes is necessary to enforce SLAs. Numerous other factors, such as trust (in the cloud provider), also come into consideration, particularly for enterprise customers that may outsource their critical data. This complex nature of the cloud landscape warrants a sophisticated means of managing SLAs. This paper proposes a mechanism for managing SLAs in a cloud computing environment using the Web Service Level Agreement (WSLA) framework, developed for SLA monitoring and SLA enforcement in a Service Oriented Architecture (SOA). We use the third-party support feature of WSLA to delegate monitoring and enforcement tasks to other entities in order to solve the trust issues. We also present a real-world use case to validate our proposal.&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Observability_%26_Contracts&amp;diff=8241</id>
		<title>Talk:DistOS-2011W Observability &amp; Contracts</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Observability_%26_Contracts&amp;diff=8241"/>
		<updated>2011-03-08T17:28:39Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Observability==&lt;br /&gt;
&lt;br /&gt;
* How do we define &#039;public&#039; action? How do we monitor &#039;public&#039; action without monitoring every action?&lt;br /&gt;
* How can you make sure your agent is acting according to your instructions?&lt;br /&gt;
* How can we ensure that information we receive through a third-party is legitimate?&lt;br /&gt;
&lt;br /&gt;
==Contracts==&lt;br /&gt;
&lt;br /&gt;
* What can or can&#039;t be contracted?&lt;br /&gt;
* How can you quantify abstract resources?&lt;br /&gt;
* How can two or more parties agree with a minimum of intervention?&lt;br /&gt;
&lt;br /&gt;
Some contracts take the form of Service Level Agreements, and efforts have been made to automate this process:&lt;br /&gt;
&lt;br /&gt;
== AURIC ==&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-540-75694-1_21 AURIC: A Scalable and Highly Reusable SLA Compliance Auditing Framework] from Lecture Notes in Computer Science, by Hasan and Burkhard Stiller, 2007.&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Service Level Agreements (SLA) are needed to allow business interactions to rely on Internet services. Service Level Objectives (SLO) specify the committed performance level of a service. Thus, SLA compliance auditing aims at verifying these commitments. Since SLOs for various application services and end-to-end performance definitions vary widely, automated auditing of SLA compliance poses a challenge to an auditing framework. Moreover, end-to-end performance data are potentially large for a provider with many customers. Therefore, this paper presents a scalable and highly reusable auditing framework and a prototype, termed AURIC (Auditing Framework for Internet Services), whose components can be distributed across different domains.&lt;br /&gt;
&lt;br /&gt;
== Bandwidth ==&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-540-30189-9_19 SLA-Driven Flexible Bandwidth Reservation Negotiation Schemes for QoS Aware IP Networks] from Lecture Notes in Computer Science by Gerard Parr and Alan Marshall, 2004.&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
We present a generic Service Level Agreement (SLA)-driven service provisioning architecture, which enables dynamic and flexible bandwidth reservation schemes on a per-user or a per-application basis. Various session-level SLA negotiation schemes involving bandwidth allocation, service start time and service duration parameters are introduced and analysed. The results show that these negotiation schemes can be utilised for the benefit of both the end user and the network provider, such as getting the highest individual SLA optimisation in terms of Quality of Service (QoS) and price. A prototype based on an industrial agent platform has also been built to demonstrate the negotiation scenario, and this is presented and discussed.&lt;br /&gt;
&lt;br /&gt;
== Dynamic Adaptation ==&lt;br /&gt;
[http://dx.doi.org/10.1007/978-3-540-89652-4_28 Context-Driven Autonomic Adaptation of SLA] from Lecture Notes in Computer Science, by Caroline Herssens, Stéphane Faulkner and Ivan Jureta, 2008.&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Service Level Agreements (SLAs) are used in Service-Oriented Computing to define the obligations of the parties involved in a transaction. SLAs define the service users’ Quality of Service (QoS) requirements that the service provider should satisfy. Requirements defined once may not be satisfiable when the context of the web services changes (e.g., when requirements or resource availability changes). Changes in the context can make SLAs obsolete, making SLA revision necessary. We propose a method to autonomously monitor the services’ context, and adapt SLAs to avoid obsolescence thereof.&lt;br /&gt;
&lt;br /&gt;
== Heuristics for Enforcing Service Level Agreements ==&lt;br /&gt;
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.127.8674&amp;amp;rep=rep1&amp;amp;type=pdf Heuristics for Enforcing Service Level Agreements in a Public Computing Utility] A master&#039;s thesis by Balasubramaneyam Maniymaran.&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
With the increasing popularity of consumer- and research-oriented wide-area applications, there arises a need for a robust and efficient wide-area resource management system. Even though a number of systems exist for wide-area resource management, they fail to couple QoS management with cost management, which is the key issue in making such a system commercially successful. Further, the lack of IT skills within companies creates a need to decouple service management from the underlying complex wide-area resource management. A public computing utility (PCU) addresses both of these issues and, in addition, creates a marketplace for selling idle computing resources.&lt;br /&gt;
&lt;br /&gt;
This work proposes a PCU model addressing the above-mentioned issues and develops heuristics to enforce QoS in that model. A new concept called virtual clusters (VCs) is introduced as semi-dynamic, service-specific resource partitions of a PCU, optimizing cost, QoS, and resource utilization. This thesis describes the methodology of VC creation, analyses the formulation of VC creation as an optimization problem, and develops solution heuristics. The concept of VC is supported by two other concepts introduced here, namely anchor point (AP) and overload partition (OLP). The concept of AP is used to represent the demand distribution in a network, which assists the problem formulation of VC creation and SLA management. The concept of overload partition is used to handle demand spikes in a VC.&lt;br /&gt;
&lt;br /&gt;
In a PCU, VC management is implemented in two phases: the first is an off-line phase of creating a VC that selects the appropriate resources and allocates them for the particular service; the second phase employs an on-line scheduling heuristic to distribute the jobs/requests from the APs among the VC nodes to achieve load balancing. A detailed simulation study is conducted to analyze the performance of different VC configurations for different load conditions and scheduling schemes, and this performance is compared with a fully dynamic resource allocation scheme called Service Grid. The results verify the novelty of the VC concept.&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Eucalyptus&amp;diff=7490</id>
		<title>DistOS-2011W Eucalyptus</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Eucalyptus&amp;diff=7490"/>
		<updated>2011-02-27T19:28:07Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: Created page with &amp;quot;=Introduction=  Describe the system(s) that you examined or compared.  Why did you choose them?  Be sure to specify a thesis that you argue in the rest of the document.  Since th…&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
&lt;br /&gt;
Describe the system(s) that you examined or compared.  Why did you choose them?  Be sure to specify a thesis that you argue in the rest of the document.  Since this is a report the thesis may be relatively weak; however, an appropriate thesis will help the reader understand why you did what you did and why you wrote what you wrote.&lt;br /&gt;
&lt;br /&gt;
End with a paragraph outlining the rest of the document.&lt;br /&gt;
&lt;br /&gt;
Be sure to change the titles of the following sections to match the structure of your paper.  In particular, please try to make them less generic.  What follows is just a suggestion; the document will be evaluated in part on the quality of writing, and good writing sometimes requires some flexibility.&lt;br /&gt;
&lt;br /&gt;
=Systems/Programs in the Space=&lt;br /&gt;
&lt;br /&gt;
Give an overview of the area you are examining.  What systems/programs are out there?&lt;br /&gt;
&lt;br /&gt;
=Evaluated Systems/Programs=&lt;br /&gt;
&lt;br /&gt;
Describe the systems individually here - their key properties, etc.  Use subsections to describe different implementations if you wish.  Briefly explain why you made the selections you did.&lt;br /&gt;
&lt;br /&gt;
=Experiences/Comparison (multiple sections)=&lt;br /&gt;
&lt;br /&gt;
In multiple sections, describe what you learned.&lt;br /&gt;
&lt;br /&gt;
=Discussion=&lt;br /&gt;
&lt;br /&gt;
What was interesting?  What was surprising?  Here you can go out on tangents relating to your work.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
&lt;br /&gt;
Summarize the report, point to future work.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&lt;br /&gt;
Give references in proper form (not just URLs if possible, give dates of access).&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011&amp;diff=7489</id>
		<title>Distributed OS: Winter 2011</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011&amp;diff=7489"/>
		<updated>2011-02-27T19:27:38Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: /* Implementation report (undergrads) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Evaluation==&lt;br /&gt;
&lt;br /&gt;
Grades in this class will be determined based on the following criteria.&lt;br /&gt;
&lt;br /&gt;
Undergraduate Students:&lt;br /&gt;
* 20% Class participation&lt;br /&gt;
* 20% Wiki participation&lt;br /&gt;
* 10% Group project oral presentation (April 5th in class)&lt;br /&gt;
* 30% Group project written report (Due April 11th)&lt;br /&gt;
* 20% Implementation report (Due March 1st)&lt;br /&gt;
&lt;br /&gt;
Graduate Students:&lt;br /&gt;
* 15% Class participation&lt;br /&gt;
* 20% Wiki participation&lt;br /&gt;
* 10% Group project oral presentation (April 5th in class)&lt;br /&gt;
* 30% Group project written report (Due April 11th)&lt;br /&gt;
* 25% Literature review paper (Due March 1st)&lt;br /&gt;
&lt;br /&gt;
Proposals for Implementation reports &amp;amp; Literature reviews should be emailed to Prof. Somayaji by &#039;&#039;&#039;February 1st&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Implementation report (undergrads)===&lt;br /&gt;
&lt;br /&gt;
An implementation report is a 5-10 page paper that either&lt;br /&gt;
# describes in detail one existing software system with distributed OS-like properties,&lt;br /&gt;
# compares and contrasts an important characteristic of 3 or more software systems with distributed OS-like properties, or&lt;br /&gt;
# reports on experiences setting up and using a software system with distributed OS-like properties.&lt;br /&gt;
Topics for an implementation report must be approved by Prof. Somayaji.&lt;br /&gt;
&lt;br /&gt;
Implementation reports for Winter 2011:&lt;br /&gt;
* [[DistOS-2011W NTP |NTP]]&lt;br /&gt;
* [[DistOS-2011W Globus |Globus Toolkit]]&lt;br /&gt;
* [[DistOS-2011W Implementation Template|Implementation Template]]&lt;br /&gt;
* [[DistOS-2011W BigTable|BigTable]]&lt;br /&gt;
* [[DistOS-2011W Cassandra and Hamachi|Cassandra and Hamachi]]&lt;br /&gt;
* [[DistOS-2011W Wuala |Wuala]]&lt;br /&gt;
* [[DistOS-2011W FWR |FWR]]&lt;br /&gt;
* [[DistOS-2011W Plan 9| Plan 9]]&lt;br /&gt;
* [[DistOS-2011W Akamai and CDN| Akamai and CDN]]&lt;br /&gt;
* [[DistOS-2011W Diaspora| Diaspora]]&lt;br /&gt;
* [[DistOS-2011W Eucalyptus |Eucalyptus]]&lt;br /&gt;
&lt;br /&gt;
Students: please add your report above following the template.&lt;br /&gt;
&lt;br /&gt;
===Literature review paper (graduate students)===&lt;br /&gt;
&lt;br /&gt;
The literature review paper should be a 8-12 page paper that reviews research and well-known commercial work in an area of distributed operating systems research or a closely related area.&lt;br /&gt;
&lt;br /&gt;
Literature Review papers for Winter 2011:&lt;br /&gt;
* [[DistOS-2011W Naming and Locating Objects in Distributed Systems|Naming and Locating Objects in Distributed Systems]]&lt;br /&gt;
* [[DistOS-2011W User Controlled Bandwidth: How Social Protocols Affect Network Protocols and Our Need for Speed|User Controlled Bandwidth]]&lt;br /&gt;
&lt;br /&gt;
Students: please add your paper above.&lt;br /&gt;
&lt;br /&gt;
==Readings==&lt;br /&gt;
&lt;br /&gt;
===January 13, 2011===&lt;br /&gt;
[http://keys.ccrcentral.net/ccr/writing/ CCR]  (two papers)&lt;br /&gt;
&lt;br /&gt;
===January 18, 2011===&lt;br /&gt;
[http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-25/oceanstore-sigplan.pdf OceanStore]  and [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-25/fast2003-pond.pdf Pond]&lt;br /&gt;
&lt;br /&gt;
===February 3, 2011===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=1450841 Robert E. Kahn, &amp;quot;Resource-Sharing Computer Communications Networks&amp;quot; (1972)]:&#039;&#039;&#039;&lt;br /&gt;
* [http://video.google.com/videoplay?docid=4989933629762859961 Computer Networks - The Heralds of Resource Sharing] (video - optional).&lt;br /&gt;
&lt;br /&gt;
===February 8, 2011===&lt;br /&gt;
&lt;br /&gt;
* Karlin et al. (2008), [http://dx.doi.org.proxy.library.carleton.ca/10.1016/j.comnet.2008.06.012 Autonomous security for autonomous systems].&lt;br /&gt;
&lt;br /&gt;
Optional readings:&lt;br /&gt;
&lt;br /&gt;
* O&#039;Donnell (2009), [http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=5350725 Prolog to A Survey of BGP Security Issues and Solutions]&lt;br /&gt;
* Butler et al. (2009), [http://ieeexplore.ieee.org.proxy.library.carleton.ca/xpls/abs_all.jsp?arnumber=5357585 A Survey of BGP Security Issues and Solutions]&lt;br /&gt;
&lt;br /&gt;
===February 10, 2011===&lt;br /&gt;
&lt;br /&gt;
* Savage et al. (2000), [http://conferences.sigcomm.org/sigcomm/2000/conf/paper/sigcomm2000-8-4.pdf Practical Network Support For IP Traceback].&lt;br /&gt;
&lt;br /&gt;
===February 15, 2011===&lt;br /&gt;
&lt;br /&gt;
* Satyanarayanan et al. (1990), [http://dx.doi.org.proxy.library.carleton.ca/10.1109/12.54838 Coda: a highly available file system for a distributed workstation environment].&lt;br /&gt;
* Ghemawat et al. (2003), [http://labs.google.com/papers/gfs.html The Google File System].&lt;br /&gt;
&lt;br /&gt;
===February 17, 2011===&lt;br /&gt;
&lt;br /&gt;
* Weil et al. (2006), [http://www.usenix.org/events/osdi06/tech/weil.html Ceph: A Scalable, High-Performance Distributed File System].&lt;br /&gt;
&lt;br /&gt;
===March 1, 2011===&lt;br /&gt;
* Oda et al. (2008), [http://people.scs.carleton.ca/~soma/pubs/oda-ccs-08.pdf SOMA: Mutual Approval for Included Content in Web Pages].&lt;br /&gt;
* Oda &amp;amp; Somayaji (2008), [http://people.scs.carleton.ca/~soma/pubs/oda-asia-08.pdf Content Provider Conflict on the Modern Web].&lt;br /&gt;
&lt;br /&gt;
===March 3, 2011===&lt;br /&gt;
Authentication&lt;br /&gt;
* OpenID&lt;br /&gt;
* non-password authentication (OTP, biometrics, graphical pass)&lt;br /&gt;
&lt;br /&gt;
===Problems to Solve===&lt;br /&gt;
*Attack computers with almost no consequences&lt;br /&gt;
**DDoS&lt;br /&gt;
**botnets&lt;br /&gt;
**capture and analyze private traffic&lt;br /&gt;
**distribute malware&lt;br /&gt;
**tampering with traffic&lt;br /&gt;
**Unauthorized access to data and resources&lt;br /&gt;
**Impersonate computers, individuals, applications&lt;br /&gt;
**Fraud, theft&lt;br /&gt;
**regulate behavior&lt;br /&gt;
&lt;br /&gt;
===Design Principles===&lt;br /&gt;
*subjects of governance: programs and computers&lt;br /&gt;
*bind programs and computers to humans &amp;amp; human organizations, but recognize binding is imperfect&lt;br /&gt;
*recognize that &amp;quot;bad&amp;quot; behavior is always possible.  &amp;quot;good&amp;quot; behavior is enforced through incentives and sanctions.&lt;br /&gt;
*rules will change.  Even rules for rule changes will change. Need a &amp;quot;living document&amp;quot; governing how rules are chosen and enforced.&lt;br /&gt;
&lt;br /&gt;
==Scenarios==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===1: Stopping DDoS===&lt;br /&gt;
Group members: Seyyed, Andrew Schoenrock, Thomas McMahon, Lester Mundt, AbdelRahman, Rakhim Davletkaliyev&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Have the machine routing packets (e.g., the ISP) detect suspicious packets; if the packets are signed, suspicious ones could be blocked and the sender put on a blacklist.&lt;br /&gt;
&lt;br /&gt;
* (AS) Stopping DDoS against files, services, programs, etc&lt;br /&gt;
** (AS) Have file replication built into the system (similar to OceanStore) so that files are always available from different servers&lt;br /&gt;
** (AS) If files are not replicated then we could have a tiered messaging system (at the top level would be OS messages) and servers could then prioritize the incoming traffic. If a given server is experiencing an overload, it could send out a distress signal to its neighbours and then distribute what it is has to them. The system should have a built-in mechanism to re-balance the overall load after something like this happens. This would then mean that any DDoS attack would result in the service being more available.&lt;br /&gt;
*** I like this idea of having service failover&lt;br /&gt;
*** Expanding on the idea of file replication and sending distress signals to its neighbours, I could envision a group of servers that would learn to help each other out, lending processing and storage when they are underutilized. They would form a sort of collective, club, or gang. Members who didn&#039;t contribute (always fully utilized) would eventually be identified and banned. It would be these other computers that the targeted server would rely on for help in this situation. However cool this is, it isn&#039;t really a solution, because the attackers could use the same strategy to recruit additional help in their attack.&lt;br /&gt;
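The distress-signal idea above can be sketched as a simple load-rebalancing routine. This is a hypothetical sketch: the `rebalance` helper, server names, and capacity numbers are all invented for illustration; an overloaded server hands excess work to its least-loaded neighbours.

```python
# Hypothetical sketch: an overloaded server hands its excess load to the
# least-loaded neighbours, so a traffic burst spreads the service out
# instead of taking it down. Names and numbers are illustrative only.

def rebalance(loads, capacity):
    """loads maps server name to current load; capacity is per-server."""
    overloaded = {s: l for s, l in loads.items() if l > capacity}
    for server, load in overloaded.items():
        excess = load - capacity
        # Ask the least-loaded neighbours for help first.
        neighbours = sorted((l, s) for s, l in loads.items() if s != server)
        for _, n in neighbours:
            if excess == 0:
                break
            room = max(0, capacity - loads[n])
            moved = min(room, excess)
            loads[n] += moved
            loads[server] -= moved
            excess -= moved
    return loads

after = rebalance({"a": 10, "b": 2, "c": 3}, capacity=6)
# "a" sheds 4 units to "b": {"a": 6, "b": 6, "c": 3}
```

A real system would also need the re-balancing step mentioned above, returning load to the original server once the burst passes.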
&lt;br /&gt;
* (AS) Stopping DDoS against specific machines&lt;br /&gt;
** (AS) I don&#039;t think that this should be specifically addressed. I think measures introduced to guard against this will ultimately negatively impact the overall system in terms of performance.&lt;br /&gt;
*** I don&#039;t like the idea of sacrificing the one for the many though.&lt;br /&gt;
**** (AS) The main thing with what I&#039;ve proposed is that the motivation behind doing a DDoS attack is completely gone (by doing one a service would either maintain or increase its overall availability). I think by eliminating the main result of a DDoS attack would mean that there would be no reason to guard against DDoS attacks on a specific machine.&lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
** Many DDoS attacks exploit the property of anonymity. These services serve anyone who requests their service, and many DDoS attacks then generate enough traffic that the computer behind the service can no longer cope. If we remove anonymity and only serve &#039;known&#039; parties, the spurious requests would be ignored. So we need to &#039;know&#039; who our friends are.&lt;br /&gt;
*** This of course requires a form of unspoofable authentication unlike IP. &lt;br /&gt;
**** (RD) Serving only &#039;known&#039; parties reduces the distribution of information, or at least its rate. I was thinking of removing anonymity at a lower level, so that any party that&#039;s not anonymous while sending a packet to your machine is considered &#039;known&#039;, and anything unknown (unsigned, unrepresented in some way) is blocked. So we don&#039;t really need to &#039;know&#039; who our friends are, we just need to know who isn&#039;t one. &lt;br /&gt;
**** (RD) Another thing I had in mind is punishment in case a &#039;known&#039; party participates in a DDoS attack: not punishing the owner of that machine (who is probably a victim as well), but the software or hardware in some sense. &lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
** (RD) How about developing such a network topology and protocols that make DDoS attacks less efficient or harder to perform? Some sort of CAPTCHA, but for machines and protocols, to distinguish them from bots, maybe? &lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
** I&#039;m not sure what is meant by stopping; I don&#039;t think we can stop DDoS given the way things are currently run, we can only block it. To my knowledge, most software that stops DDoS does so by blocking, or even a complete shutdown, as with McColo.&lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
**One method is to apply the same approach used to eliminate DoS: rejecting subsequent requests from irrelevant sources beyond a specific rate.&lt;br /&gt;
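The rate-based rejection idea above amounts to per-source rate limiting. A minimal sketch, assuming a token-bucket policer (the rate and capacity values are invented for illustration):

```python
# Hypothetical sketch: a per-source token bucket that rejects requests
# once a source exceeds its allowed rate, throttling flood traffic.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        """Return True if a request arriving at time `now` (seconds) is admitted."""
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2)
admitted = [bucket.allow(t) for t in (0.0, 0.1, 0.2, 0.3)]
# The burst of two is admitted, then the flood is rejected:
# [True, True, False, False]
```

A router would keep one bucket per source identity, which is why this pairs naturally with the unspoofable-authentication ideas discussed above.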
&lt;br /&gt;
*One way to stop DDoS would be to have each connection to the internet assigned to a particular identity. This identity would be used to verify who is attempting connections. The reason DDoS works is that, currently, IP addresses can be spoofed. The only way to verify an identity is to request a response, but by then the damage is done. With a verified identity, connection attempts being routed can be verified during transmission, so that the request may not necessarily even reach the destination host.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Basically, we need some encryption system using keys so that as the packets are being routed, the identity of the packet&#039;s sender can be verified. Ideally the verification would be trivial, so as to prevent noticeable latency. Because identities are verified, spoofed packets would be dropped during routing. If all the identities are verified and a DDoS attack is still attempted, the attack can be traced back to the attacker&#039;s identity.&lt;br /&gt;
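A minimal sketch of the keyed-verification idea above, using a shared-key HMAC from Python&#039;s standard library. The key registry and packet layout are hypothetical; a real design would use public-key signatures so routers need not hold sender secrets.

```python
import hmac
import hashlib

# Hypothetical sketch: each sender shares a key with the routing
# infrastructure and tags every packet; routers recompute the tag and
# drop packets whose tag does not match, rejecting spoofed identities
# in transit. The key registry and packet layout are invented here.

KEYS = {"host-a": b"secret-key-for-host-a"}

def sign_packet(sender, payload, key):
    tag = hmac.new(key, sender.encode() + payload, hashlib.sha256).hexdigest()
    return {"sender": sender, "payload": payload, "tag": tag}

def verify_packet(packet):
    key = KEYS.get(packet["sender"])
    if key is None:
        return False  # unknown identity: drop
    expected = hmac.new(key, packet["sender"].encode() + packet["payload"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["tag"])

good = sign_packet("host-a", b"hello", KEYS["host-a"])
forged = dict(good, tag="0" * 64)  # spoofed tag fails verification
```

Verification here is one hash per packet, which matches the requirement above that checking be cheap enough to avoid noticeable latency.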
&lt;br /&gt;
(RD) (I think we&#039;re not looking low enough. We&#039;re trying to find a solution to this problem while assuming the system that made the problem possible stays unchanged. We enforce more security by identification, encryption, etc., but the system is still problem-prone. This will allow us to identify an attacker, but only after the attack has started (or even finished). It&#039;s like trying to eliminate theft from a society of poor, unemployed, uneducated people by enforcing more security and punishment, which will reduce the rate and motivation but can&#039;t stop the possible attack. It&#039;s a crude analogy, but rather than policing that society, I want to make its members rich, employed, and educated, so that theft is just not an efficient way for them to get goods. So, rather than protecting machines from attacks, I want to build a system where DDoS attacks are simply pointless.)&lt;br /&gt;
&lt;br /&gt;
===2: Stopping phishing===&lt;br /&gt;
Group members: Waheed Ahmed, Nicolas Lessard, Raghad Al-Awwad, Tarjit Komal&lt;br /&gt;
&lt;br /&gt;
* A way of automatically checking the signature of a message to make sure it really is from a trusted source.&lt;br /&gt;
** ie: &amp;quot;Nation of Banks, did your member TD send me a message to reset my password?&amp;quot; &lt;br /&gt;
&lt;br /&gt;
*There should be filters to verify where a message is coming from. If the message is coming from an unknown source, it should be blocked.&lt;br /&gt;
*Don&#039;t use the links in an email to get to a web page if you suspect the message might not be authentic.&lt;br /&gt;
*Avoid filling out forms in email messages that ask for personal financial information. Phishers can make exact copies of the forms found on financial institutions&#039; sites.&lt;br /&gt;
*Make it so a machine needs to be authorized to use your information -- a machine that you don&#039;t own can&#039;t use your information to do anything, regardless of whether it has it or not.&lt;br /&gt;
*Ensure that any website that requires the filling of personal information be a secure website which can be traced to the original organisation.&lt;br /&gt;
*Ensure that whatever browser you are using is up to date with the most recent security patches applied.&lt;br /&gt;
*Obviously, report any suspected phishing to the appropriate authorities so that proper action can be taken.&lt;br /&gt;
*&amp;quot;three strikes and you&#039;re out&amp;quot;&lt;br /&gt;
**Each machine is responsible for the messages it releases. When a machine is a repeat offender it loses access privileges.&lt;br /&gt;
*Revamp the security login process to something similar to:&lt;br /&gt;
**The user enters their username and clicks next.&lt;br /&gt;
**The server returns the user&#039;s predefined image.&lt;br /&gt;
**If the image is the right one, the user enters their password to log on.&lt;br /&gt;
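The image-verification login steps above could be sketched roughly as follows. This is a minimal illustration with made-up names, not a full design; a real system would need a user database and proper password hashing.&lt;br /&gt;

```python
# Minimal sketch of the two-step image-verification login described above.
# All names here are hypothetical; a real system would use a database
# and a proper password-hashing scheme, not Python's built-in hash().

USERS = {
    "alice": {"image": "blue_boat.png", "password_hash": hash("s3cret")},
}

def step1_request_image(username):
    """Step 1: user enters a username; server returns their predefined image."""
    user = USERS.get(username)
    return user["image"] if user else None

def step2_login(username, shown_image, password):
    """Step 2: user confirms the image is theirs, then submits the password."""
    user = USERS.get(username)
    if user is None or shown_image != user["image"]:
        return False  # wrong image: likely a spoofed page, abort
    return hash(password) == user["password_hash"]
```

The point of the image step is that a phishing site that does not know the user&#039;s predefined image cannot convincingly ask for the password.&lt;br /&gt;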
&lt;br /&gt;
===3: Limiting the spread of malware===&lt;br /&gt;
Group members: keith, Andrew Luczak, David Barrera, Trevor Gelowsky, Scott Lyons&lt;br /&gt;
*(KM) Heterogeneous systems - it is much easier to write code to attack a single type of system&lt;br /&gt;
*(KM) Individualized security policies&lt;br /&gt;
**(AL) A baseline security level would help prevent malware spreading to/from a system with &amp;quot;individual non-security&amp;quot;&lt;br /&gt;
*(KM) Identify all programs through digital signatures&lt;br /&gt;
*(KM) Peer rating system for programs, customize security policies based on peer ratings&lt;br /&gt;
**(SL) Need some way to keep rating system from being &amp;quot;gamed&amp;quot;&lt;br /&gt;
***(AL) Maybe a program gets flagged if it experiences a rapid approval increase?&lt;br /&gt;
**(AL) Need to protect against benign programs with good ratings being updated into malware&lt;br /&gt;
*(KM) System level forensics on program execution and resource/file modification&lt;br /&gt;
*(KM) Customizable user and program blacklists&lt;br /&gt;
*(SL) Sandboxing with breach management - know what files have been modified by a process&lt;br /&gt;
*(SL) Trending - what does the application spend most of its time doing?&lt;br /&gt;
&lt;br /&gt;
*(DB) Multiple control/chokepoints where malware is looked for. This way, it&#039;s more difficult for attackers to take over several control points and for malware to remain unnoticed. &lt;br /&gt;
*(DB) Heterogeneous systems help limit the spread of malware too. There are two points here. (1) If we&#039;re designing this system where we&#039;re all masters of our own domains, then we&#039;re likely to have different system configurations. However (2), if we want to communicate and interact with other domains, we need some standardized communication layer or mechanism. Standardization is very closely tied to homogeneity.&lt;br /&gt;
*(DB) There should be consequences if you harbor malware or if malware originates from within your domain. This could be an incentive for people to be more proactive about security.&lt;br /&gt;
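As one way to picture (AL)&#039;s flagging idea, here is a hypothetical sketch that flags a program whose daily approvals spike far above its recent baseline. The thresholds are invented purely for illustration.&lt;br /&gt;

```python
# Hypothetical sketch of flagging a program whose peer rating rises
# suspiciously fast, since a sudden approval spike may mean the rating
# system is being gamed. The spike_factor and minimum count are made up.

def is_flagged(daily_approvals, spike_factor=5.0, baseline_days=7):
    """daily_approvals: approval counts per day, oldest first.
    Flag if today's count exceeds spike_factor times the trailing
    average over the previous baseline_days."""
    if not len(daily_approvals) > baseline_days:
        return False  # not enough history to establish a baseline
    baseline = daily_approvals[-(baseline_days + 1):-1]
    avg = sum(baseline) / baseline_days
    today = daily_approvals[-1]
    # ignore tiny absolute counts so new programs aren't flagged by noise
    return today > max(avg * spike_factor, 10)
```

A flagged program would not be blocked outright, only subjected to extra scrutiny, since legitimate programs can also gain popularity quickly.&lt;br /&gt;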
&lt;br /&gt;
===4: Bandwidth hogs===&lt;br /&gt;
Group members: Mike Preston, Fahim Rahman, Michael Du Plessis, Matthew Chou, Ahmad Yafawi&lt;br /&gt;
&lt;br /&gt;
*limit bandwidth for each user&lt;br /&gt;
*if user has significant bandwidth demands for a certain period of time&lt;br /&gt;
**add them to a watch list&lt;br /&gt;
**monitor their behaviour&lt;br /&gt;
**divert communication to other hosts that can satisfy requests.&lt;br /&gt;
***if there are no other hosts that can satisfy the request, then distribute data to other idle and capable hosts. Load is now reduced on the one link.&lt;br /&gt;
*QoS&lt;br /&gt;
*Tiered Bandwidth Distribution&lt;br /&gt;
**The main idea is that the more bandwidth you give back to the community, the more your machine gets.&lt;br /&gt;
***It&#039;s similar to some trackers and darknet programs that won&#039;t increase your download speed unless you contribute a certain number of bytes back to your peers.&lt;br /&gt;
**Tier 1: basic privileges, i.e. all machines have minimal bandwidth.&lt;br /&gt;
**Tier n: we define some requirements to be met, then increase bandwidth accordingly.&lt;br /&gt;
***Drop a machine a tier if it doesn&#039;t maintain the requirements of that specific tier.&lt;br /&gt;
***Advantage: monitoring bandwidth on the network is cheap, although implementing what is stated above is not.&lt;br /&gt;
*As a metaphor for our &amp;quot;real world society&amp;quot;, bandwidth can be controlled the way we control speed for cars.&lt;br /&gt;
**Certain areas need more free-flowing traffic, so speed limits are increased.  Others require a slower pace, which is enforced.  These &amp;quot;areas&amp;quot; can be translated to users or programs in our distributed OS model.&lt;br /&gt;
**There are repercussions for breaking any of these imposed limits.&lt;br /&gt;
**Throttling provides one possible implementation of these constraints.&lt;br /&gt;
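The throttling mentioned above is often implemented with a token bucket: each user accumulates &amp;quot;tokens&amp;quot; at their allowed rate and spends them to send bytes. A rough sketch, not a production traffic shaper:&lt;br /&gt;

```python
# Illustrative token-bucket rate limiter: one common way to implement
# per-user throttling. Rates and units here are arbitrary examples.

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s      # sustained rate (the "speed limit")
        self.capacity = burst_bytes       # maximum short-term burst
        self.tokens = burst_bytes         # start with a full bucket
        self.last = 0.0                   # timestamp of the last check

    def allow(self, nbytes, now):
        """Return True if nbytes may be sent at time 'now'."""
        elapsed = now - self.last
        self.last = now
        # refill tokens for the time elapsed, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

In the tier scheme above, a higher tier would simply mean a larger `rate` and `burst` for that user&#039;s bucket.&lt;br /&gt;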
&lt;br /&gt;
====Bandwidth Hog Additional Sources and Information====&lt;br /&gt;
1. [http://repository.lib.ncsu.edu/ir/bitstream/1840.16/1197/1/etd.pdf A Solution to Bandwidth Hogs in a Cable Network]&lt;br /&gt;
*Starting at page 120 of this thesis is a proposed solution to bandwidth hogs on a cable network. In general, the proposal is essentially equivalent to throttling; however, I found the description of the solution helpful. It may go well with our tiered suggestion if we keep the &amp;quot;earned trust&amp;quot; approach to bandwidth access but also allow users to go above their tier during low-congestion periods. For example, if congestion is low, why not let people on the network occupy much larger bandwidths? The network would include some monitoring protocol that decides how much access a user is allowed: if more bandwidth is available and a request needs it, let the user have it. On the other hand, if congestion is high, a user doing something bandwidth-heavy is capped at the upper limit of their tier. In this manner each user is guaranteed the amount they have earned at their tier, but users who don&#039;t want to earn a higher tier for high-usage timeframes can instead opt to run their bandwidth-heavy applications during low-congestion timeframes. The network could also publish live data on current bandwidth usage, along with trending data, so that people can plan when to start bandwidth-heavy applications.&lt;br /&gt;
&lt;br /&gt;
2. [http://yuba.stanford.edu/rcp/flowCompTime-dukkipati.pdf Why Flow-Completion Time is the Right Metric for Congestion Control]&lt;br /&gt;
*This is a short article which raises an interesting question related to our topic: how should we determine what counts as &amp;quot;bandwidth hogging&amp;quot;? For example, do we look at the strain on the network in some capacity (i.e. dropped packets, usage level of the capacity of the pipe, etc.), which is important information for those who build the network; or do we look at the time it takes for a transaction to complete when a user requests it? This article argues that from a user&#039;s point of view, they do not care how much bandwidth they get as long as the task they are requesting is completed as quickly as possible. In our discussion in class we talked about how the majority of people currently do not have large bandwidth needs for normal transactions (email, web searching, wikis ;-) ), and a much smaller percentage of the population actually eats up the larger bandwidth through hog-like applications. Maybe instead of focusing on bandwidth as the main issue, we should think about how long it takes to complete tasks. Our tiered system could also incorporate this train of thought: people who only send email and surf the web are at tier 1, people who use online storage and FTP are at tier 2, people who stream movies and other data are at tier 3, etc. Then each tier could cost a different amount, with some form of control over the technologies available at each tier so that its restrictions are adhered to.&lt;br /&gt;
&lt;br /&gt;
3. [http://research.microsoft.com/en-us/people/asellen/pap0209-chetty.pdf Who’s Hogging The Bandwidth?: The Consequences Of Revealing The Invisible In The Home]&lt;br /&gt;
*This article is from Microsoft Research and is an interesting look into controlling bandwidth usage by giving people a tool to monitor usage and alter how bandwidth is allocated. The tool essentially boils down to the social-control idea that we discussed in class: if you know that your neighbours are hogging the bandwidth for very low-priority tasks, should you not be able to appeal to their conscience in order to gain the resources you need? The article gives examples of homes that were provided this control and how household politics factored into bandwidth usage. When usage was no longer hidden, it seems to have become easier to openly discuss how to divide the finite amount of bandwidth. Initial concerns revolved around people hogging the bandwidth for themselves or playing practical jokes on others in the house by reducing their usage in the middle of some task. Another issue this type of control raises is how to prioritize which tasks are &amp;quot;more important&amp;quot;: one example given was whether a Skype call to family and friends is more important than watching YouTube videos for a work-related task. Interestingly, the field studies turned up examples of an emerging &amp;quot;bandwidth etiquette&amp;quot;. For example, it was considered very rude to limit someone&#039;s bandwidth while they were on a Skype call, due to the immediate and negative effect, but it was deemed acceptable to limit bandwidth during a file transfer, as that just meant a few extra minutes for the transfer to complete.&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Help:Contents&amp;diff=7488</id>
		<title>Help:Contents</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Help:Contents&amp;diff=7488"/>
		<updated>2011-02-27T19:27:00Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: Created page with &amp;quot;I am here&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I am here&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7487</id>
		<title>DistOS-2011W</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7487"/>
		<updated>2011-02-27T19:24:32Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7486</id>
		<title>DistOS-2011W</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7486"/>
		<updated>2011-02-27T19:22:00Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Eucalyptus]][DistOS-2011W Eucalyptus | Eucalyptus]&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7485</id>
		<title>DistOS-2011W</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7485"/>
		<updated>2011-02-27T19:21:39Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[DistOS-2011W Eucalyptus | Eucalyptus]&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7484</id>
		<title>DistOS-2011W</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7484"/>
		<updated>2011-02-27T19:18:00Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[DistOS-2011W]&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7483</id>
		<title>DistOS-2011W</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W&amp;diff=7483"/>
		<updated>2011-02-27T19:15:26Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: Created page with &amp;quot;[DistOS-2011W Eucalyptus]&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[DistOS-2011W Eucalyptus]&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011&amp;diff=7133</id>
		<title>Distributed OS: Winter 2011</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011&amp;diff=7133"/>
		<updated>2011-01-20T18:29:26Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: /* 2: Stopping phishing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Evaluation==&lt;br /&gt;
&lt;br /&gt;
Grades in this class will be determined based on the following criteria.&lt;br /&gt;
&lt;br /&gt;
Undergraduate Students:&lt;br /&gt;
* 20% Class participation&lt;br /&gt;
* 20% Wiki participation&lt;br /&gt;
* 10% Group project oral presentation&lt;br /&gt;
* 30% Group project written report&lt;br /&gt;
* 20% Implementation report (Due late Feb.)&lt;br /&gt;
&lt;br /&gt;
Graduate Students:&lt;br /&gt;
* 15% Class participation&lt;br /&gt;
* 20% Wiki participation&lt;br /&gt;
* 10% Group project oral presentation&lt;br /&gt;
* 30% Group project written report&lt;br /&gt;
* 25% Literature review paper (Due late Feb.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Implementation report (undergrads)===&lt;br /&gt;
&lt;br /&gt;
An implementation report is a 5-10 page paper that either&lt;br /&gt;
# describes in detail one existing software system with distributed OS-like properties,&lt;br /&gt;
# compares and contrasts an important characteristic of 3 or more software systems with distributed OS-like properties, or&lt;br /&gt;
# reports on experiences setting up and using a software system with distributed OS-like properties.&lt;br /&gt;
Topics for an implementation report must be approved by Prof. Somayaji.&lt;br /&gt;
&lt;br /&gt;
===Literature review paper===&lt;br /&gt;
&lt;br /&gt;
==Readings==&lt;br /&gt;
&lt;br /&gt;
January 13, 2011:  [http://keys.ccrcentral.net/ccr/writing/ CCR]  (two papers)&lt;br /&gt;
&lt;br /&gt;
January 18, 2011:  [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-25/oceanstore-sigplan.pdf OceanStore]  and [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-25/fast2003-pond.pdf Pond]&lt;br /&gt;
&lt;br /&gt;
==Internet Governance==&lt;br /&gt;
&lt;br /&gt;
===Problems to Solve===&lt;br /&gt;
*Attack computers with almost no consequences&lt;br /&gt;
**DDoS&lt;br /&gt;
**botnets&lt;br /&gt;
**capture and analyze private traffic&lt;br /&gt;
**distribute malware&lt;br /&gt;
**tampering with traffic&lt;br /&gt;
**Unauthorized access to data and resources&lt;br /&gt;
**Impersonate computers, individuals, applications&lt;br /&gt;
**Fraud, theft&lt;br /&gt;
**regulate behavior&lt;br /&gt;
&lt;br /&gt;
===Design Principles===&lt;br /&gt;
*subjects of governance: programs and computers&lt;br /&gt;
*bind programs and computers to humans &amp;amp; human organizations, but recognize binding is imperfect&lt;br /&gt;
*recognize that &amp;quot;bad&amp;quot; behavior is always possible.  &amp;quot;good&amp;quot; behavior is enforced through incentives and sanctions.&lt;br /&gt;
*rules will change.  Even rules for rule changes will change. Need a &amp;quot;living document&amp;quot; governing how rules are chosen and enforced.&lt;br /&gt;
&lt;br /&gt;
==Scenarios==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===1: Stopping DDoS===&lt;br /&gt;
Group members: Seyyed, Andrew Schoenrock, Thomas McMahon, Lester Mundt, AbdelRahman, Rakhim Davletkaliyev&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Have the machine routing packets (e.g. at the ISP) detect suspicious packets; if the packets are signed, the suspicious ones could be blocked and &lt;br /&gt;
the sender put on a blacklist.&lt;br /&gt;
&lt;br /&gt;
* (AS) Stopping DDoS against files, services, programs, etc&lt;br /&gt;
** (AS) Have file replication built into the system (similar to OceanStore) so that files are always available from different servers&lt;br /&gt;
** (AS) If files are not replicated then we could have a tiered messaging system (at the top level would be OS messages) and servers could then prioritize the incoming traffic. If a given server is experiencing an overload, it could send out a distress signal to its neighbours and then distribute what it has to them. The system should have a built-in mechanism to re-balance the overall load after something like this happens. This would then mean that any DDoS attack would result in the service being more available.&lt;br /&gt;
*** I like this idea of having service failover.&lt;br /&gt;
*** Expanding on the idea of file replication and sending distress signals to its neighbours, I could envision a group of servers that would learn to help each other out, lending processing and storage when they are under-utilized.  They would form a sort of collective, club or gang.  Members who didn&#039;t contribute (always fully utilized) would eventually be identified and banned.  It is these other computers that the targeted server would rely on for help in this situation. However cool this is, it isn&#039;t really a solution, because one could suppose the attackers might use the same strategy to recruit additional help in their attack. &lt;br /&gt;
&lt;br /&gt;
* (AS) Stopping DDoS against specific machines&lt;br /&gt;
** (AS) I don&#039;t think that this should be specifically addressed. I think measures introduced to guard against this will ultimately negatively impact the overall system in terms of performance.&lt;br /&gt;
*** I don&#039;t like the idea of sacrificing the one for the many though.&lt;br /&gt;
**** (AS) The main thing with what I&#039;ve proposed is that the motivation behind doing a DDoS attack is completely gone (by doing one, a service would either maintain or increase its overall availability). I think eliminating the main result of a DDoS attack would mean there is no reason to guard against DDoS attacks on a specific machine.&lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
** Many DDoS attacks exploit anonymity.  These services serve anyone who requests their service.  Many DDoS attacks then generate enough traffic that the computer behind the service can no longer cope.  If we remove anonymity and only serve &#039;known&#039; parties, the spurious requests would be ignored.   So we need to &#039;know&#039; who our friends are.&lt;br /&gt;
*** This of course requires a form of unspoofable authentication unlike IP. &lt;br /&gt;
**** Serving only &#039;known&#039; parties reduces the distribution of information, or at least its rate. I was thinking of removing anonymity at a lower level, so that any party that&#039;s not anonymous while sending a packet to your machine is considered &#039;known&#039;, and anything unknown (unsigned, unrepresented in some way) is blocked. So, we don&#039;t really need to &#039;know&#039; who our friends are, we just need to know who they aren&#039;t. &lt;br /&gt;
**** Another thing I had in mind is punishment in case a &#039;known&#039; party participates in DDoS-attack: not punishing the owner of that machine (who probably is a victim as well), but the software or hardware in some sense. &lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
**How about developing such a network topology and protocols that make DDoS attacks less efficient or harder to perform? Some sort of CAPTCHA, but for machines and protocols, to distinguish them from bots, maybe? &lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
** I&#039;m not sure what is meant by stopping; I don&#039;t think we can stop DDoS given the way things are currently run, we can only block it. From my knowledge, most software that stops DDoS does so by blocking, or even by complete shutdown, as with McColo.&lt;br /&gt;
&lt;br /&gt;
*Stopping DDos&lt;br /&gt;
**One method is to reuse the standard defence against DoS -- rejecting subsequent requests above a certain rate -- but applied across seemingly unrelated sources.&lt;br /&gt;
&lt;br /&gt;
*How we could stop DDoS would be to have each connection to the internet assigned to a particular identity. This identity would be used to verify who is attempting connections. The reason DDoS works is because currently, IP addresses can be spoofed. The only way to verify an identity is to request a response, but by then the damage is done. With a verified identity, connection attempts being routed can be verified during transmission, so that the request may not necessarily even reach the destination host.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Basically, we need some encryption system using keys so that, as packets are being routed, the identity of the packet&#039;s sender can be verified. Ideally the verification would be trivial, so as to prevent noticeable latency. Because identities are verified, spoofed packets would be dropped during routing. If verified identities are still attempting a DDoS attack, the attack can be traced back to the attacker.&lt;br /&gt;
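As an illustration of this signed-packet idea, here is a sketch using a keyed hash (HMAC) as the identity tag. A real design would presumably use asymmetric signatures so routers would not need the sender&#039;s secret key; all names here are made up.&lt;br /&gt;

```python
import hmac
import hashlib

# Illustrative sketch of per-packet sender verification. The identity
# tag is an HMAC over the payload; a router drops packets whose tag
# does not verify, so spoofed traffic never reaches the destination.

def sign_packet(payload: bytes, sender_key: bytes) -> bytes:
    """Sender attaches this tag to each packet it emits."""
    return hmac.new(sender_key, payload, hashlib.sha256).digest()

def route_packet(payload: bytes, tag: bytes, sender_key: bytes) -> bool:
    """Router check: True means forward the packet, False means drop it.
    compare_digest avoids timing side channels when comparing tags."""
    expected = sign_packet(payload, sender_key)
    return hmac.compare_digest(tag, expected)
```

HMAC verification is cheap enough to do per packet, which matches the goal above of avoiding noticeable routing latency.&lt;br /&gt;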
&lt;br /&gt;
(I think we&#039;re not looking low enough. We&#039;re trying to solve this problem while assuming that the system that made it possible remains unchanged. We enforce more security through identification, encryption, etc., but the system is still problem-prone. This lets us identify an attacker, but only after the attack has started (or even finished). It&#039;s like trying to eliminate theft from a society of poor, unemployed, uneducated people by enforcing more security and punishment, which will help reduce the rate and the motivation but can&#039;t stop the attacks themselves. It&#039;s a crude analogy, but rather than policing that society, I want to make its people rich, employed and educated, so that theft is simply not an efficient way for them to get goods. So, rather than protecting machines from attacks, I want to build a system in which DDoS attacks are simply inappropriate.)&lt;br /&gt;
&lt;br /&gt;
===2: Stopping phishing===&lt;br /&gt;
Group members: Waheed Ahmed, Nicolas Lessard, Raghad Al-Awwad, Tarjit Komal&lt;br /&gt;
&lt;br /&gt;
* A way of automatically checking the signature of a message to make sure it really is from a trusted source.&lt;br /&gt;
** ie: &amp;quot;Nation of Banks, did your member TD send me a message to reset my password?&amp;quot; &lt;br /&gt;
&lt;br /&gt;
*There should be filters to verify where a message is coming from. If the message comes from an unknown source, it should be blocked. &lt;br /&gt;
*Don&#039;t use the links in an email to get to any web page, if you suspect the message might not be authentic.&lt;br /&gt;
*Avoid filling out forms in email messages that ask for personal financial information. Phishers can make exact copies of the forms found on a financial institution&#039;s site.&lt;br /&gt;
*Make it so a machine needs to be authorized to use your information -- a machine that you don&#039;t own can&#039;t use your information to do anything, regardless of whether it has it or not.&lt;br /&gt;
*Ensure that any website that asks for personal information is a secure site that can be traced back to the original organisation.&lt;br /&gt;
*Ensure that whatever browser you are using is up to date with the most recent security patches applied.&lt;br /&gt;
*Obviously, report any suspected phishing to the appropriate authorities so that proper action can be taken.&lt;br /&gt;
*&amp;quot;three strikes and you&#039;re out&amp;quot;&lt;br /&gt;
**Each machine is responsible for the messages it releases. When a machine is a repeat offender it loses access privileges.&lt;br /&gt;
*Revamp the security login process to something similar to:&lt;br /&gt;
**The user enters their username and clicks next.&lt;br /&gt;
**The server returns the user&#039;s predefined image.&lt;br /&gt;
**If the image is the right one, the user enters their password to log on.&lt;br /&gt;
&lt;br /&gt;
===3: Limiting the spread of malware===&lt;br /&gt;
Group members: keith, Andrew Luczak, David Barrera, Trevor Gelowsky, Scott Lyons&lt;br /&gt;
*(KM) Heterogeneous systems - it is much easier to write code to attack a single type of system&lt;br /&gt;
*(KM) Individualized security policies&lt;br /&gt;
**(AL) A baseline security level would help prevent malware spreading to/from a system with &amp;quot;individual non-security&amp;quot;&lt;br /&gt;
*(KM) Identify all programs through digital signatures&lt;br /&gt;
*(KM) Peer rating system for programs, customize security policies based on peer ratings&lt;br /&gt;
**(SL) Need some way to keep rating system from being &amp;quot;gamed&amp;quot;&lt;br /&gt;
***(AL) Maybe a program gets flagged if it experiences a rapid approval increase?&lt;br /&gt;
**(AL) Need to protect against benign programs with good ratings being updated into malware&lt;br /&gt;
*(KM) System level forensics on program execution and resource/file modification&lt;br /&gt;
*(KM) Customizable user and program blacklists&lt;br /&gt;
*(SL) Sandboxing with breach management - know what files have been modified by a process&lt;br /&gt;
*(SL) Trending - what does the application spend most of its time doing?&lt;br /&gt;
&lt;br /&gt;
*(DB) Multiple control/chokepoints where malware is looked for. This way, it&#039;s more difficult for attackers to take over several control points and for malware to remain unnoticed. &lt;br /&gt;
*(DB) Heterogeneous systems help limit the spread of malware too. There are two points here. (1) If we&#039;re designing this system where we&#039;re all masters of our own domains, then we&#039;re likely to have different system configurations. However (2), if we want to communicate and interact with other domains, we need some standardized communication layer or mechanism. Standardization is very closely tied to homogeneity.&lt;br /&gt;
*(DB) There should be consequences if you harbor malware or if malware originates from within your domain. This could be an incentive for people to be more proactive about security.&lt;br /&gt;
&lt;br /&gt;
===4: Bandwidth hogs===&lt;br /&gt;
Group members: Mike Preston, Fahim Rahman, Michael Du Plessis, Matthew Chou, Ahmad Yafawi&lt;br /&gt;
&lt;br /&gt;
*bandwidth management/scheduling (similar to OS scheduling)&lt;br /&gt;
**utilizing a round-robin schedule to allow for periodic increases in bandwidth per user&lt;br /&gt;
**a priority system that allows more critical operations being done by a user to take precedence over others&lt;br /&gt;
*have the bandwidth split evenly across all users and allow users to donate their share for others to use, but revoke it at any time&lt;br /&gt;
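The even-split-with-donation idea could be sketched like this; all names are made up and the accounting is deliberately simplistic (one whole-share donation per donor):&lt;br /&gt;

```python
# Sketch of the even-split-with-donation idea above: every user gets an
# equal share, a donor lends their whole share to one recipient, and a
# donor can revoke at any time. Purely illustrative, not a real allocator.

class BandwidthPool:
    def __init__(self, total, users):
        self.total = total
        self.users = set(users)
        self.donations = {}  # donor -> recipient

    def share(self, user):
        """Current allocation: equal base share, plus shares donated to
        this user, minus the share this user has donated away."""
        base = self.total / len(self.users)
        received = base * sum(1 for r in self.donations.values() if r == user)
        given = base if user in self.donations else 0
        return base + received - given

    def donate(self, donor, recipient):
        self.donations[donor] = recipient

    def revoke(self, donor):
        """Donor reclaims their share at any time."""
        self.donations.pop(donor, None)
```

A real system would also need the priority and round-robin mechanisms above to decide who gets unclaimed capacity from idle users.&lt;br /&gt;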
&lt;br /&gt;
*Tiered Bandwidth Distribution&lt;br /&gt;
**The main idea is that the more bandwidth you give back to the community, the more your machine gets.&lt;br /&gt;
***It&#039;s similar to some trackers and darknet programs that won&#039;t increase your download speed unless you contribute a certain number of bytes back to your peers.&lt;br /&gt;
**Tier 1: basic privileges, i.e. all machines have minimal bandwidth.&lt;br /&gt;
**Tier n: we define some requirements to be met, then increase bandwidth accordingly.&lt;br /&gt;
***Drop a machine a tier if it doesn&#039;t maintain the requirements of that specific tier.&lt;br /&gt;
&lt;br /&gt;
***Advantage: monitoring bandwidth on the network is cheap, although implementing what is stated above is not.&lt;br /&gt;
*As a metaphor for our &amp;quot;real world society&amp;quot;, bandwidth can be controlled the way we control speed for cars.&lt;br /&gt;
**Certain areas need more free-flowing traffic, so speed limits are increased.  Others require a slower pace, which is enforced.  These &amp;quot;areas&amp;quot; can be translated to users or programs in our distributed OS model.&lt;br /&gt;
**There are repercussions for breaking any of these imposed limits.&lt;br /&gt;
**Throttling provides one possible implementation of these constraints.&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011&amp;diff=7121</id>
		<title>Distributed OS: Winter 2011</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Distributed_OS:_Winter_2011&amp;diff=7121"/>
		<updated>2011-01-20T16:52:57Z</updated>

		<summary type="html">&lt;p&gt;Tkomal: /* 2: Stopping phishing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Readings==&lt;br /&gt;
&lt;br /&gt;
January 13, 2011:  [http://keys.ccrcentral.net/ccr/writing/ CCR]  (two papers)&lt;br /&gt;
&lt;br /&gt;
January 18, 2011:  [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-25/oceanstore-sigplan.pdf OceanStore]  and [http://homeostasis.scs.carleton.ca/~soma/distos/2008-02-25/fast2003-pond.pdf Pond]&lt;br /&gt;
&lt;br /&gt;
==Internet Governance==&lt;br /&gt;
&lt;br /&gt;
===Problems to Solve===&lt;br /&gt;
*Attack computers with almost no consequences&lt;br /&gt;
**DDoS&lt;br /&gt;
**botnets&lt;br /&gt;
**capture and analyze private traffic&lt;br /&gt;
**distribute malware&lt;br /&gt;
**tampering with traffic&lt;br /&gt;
**Unauthorized access to data and resources&lt;br /&gt;
**Impersonate computers, individuals, applications&lt;br /&gt;
**Fraud, theft&lt;br /&gt;
**regulate behavior&lt;br /&gt;
&lt;br /&gt;
===Design Principles===&lt;br /&gt;
*subjects of governance: programs and computers&lt;br /&gt;
*bind programs and computers to humans &amp;amp; human organizations, but recognize binding is imperfect&lt;br /&gt;
*recognize that &amp;quot;bad&amp;quot; behavior is always possible.  &amp;quot;good&amp;quot; behavior is enforced through incentives and sanctions.&lt;br /&gt;
*rules will change.  Even rules for rule changes will change. Need a &amp;quot;living document&amp;quot; governing how rules are chosen and enforced.&lt;br /&gt;
&lt;br /&gt;
==Scenarios==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===1: Stopping DDoS===&lt;br /&gt;
Group members: Seyyed, Andrew Schoenrock, Thomas McMahon, Lester Mundt, AbdelRahman, Rakhim Davletkaliyev&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Have the machine routing packets (e.g. at the ISP) detect suspicious packets; if the packets are signed, the suspicious ones could be blocked and &lt;br /&gt;
the sender put on a blacklist.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* (AS) Stopping DDoS against files, services, programs, etc&lt;br /&gt;
** (AS) Have file replication built into the system (similar to OceanStore) so that files are always available from different servers&lt;br /&gt;
** (AS) If files are not replicated then we could have a tiered messaging system (at the top level would be OS messages) and servers could then prioritize the incoming traffic. If a given server is experiencing an overload, it could send out a distress signal to its neighbours and then distribute what it has to them. The system should have a built-in mechanism to re-balance the overall load after something like this happens. This would then mean that any DDoS attack would result in the service being more available.&lt;br /&gt;
*** I like this idea of having service fallover&lt;br /&gt;
*** Expanding on the idea of file replication and sending distress signals to its neighbours, I could envision a group of servers that would learn to help each other out, lending processing and storage when they are under-utilized. They would sort of form a collective, club or gang. Members who didn&#039;t contribute (always fully utilized) would eventually be identified and banned. It would be these other computers that the targeted server would rely on for help in this situation. However cool this is, it isn&#039;t really a solution, because one could suppose the attackers might utilize the same strategy to recruit additional help in their attack. &lt;br /&gt;
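A rough sketch of the distress-signal idea above, in Python. The Server class, the CAPACITY threshold, and the shed_load logic are all illustrative assumptions, not a real protocol.&lt;br /&gt;

```python
# Hypothetical sketch: an overloaded server sends a distress signal by
# handing excess requests to its least-loaded neighbour.

CAPACITY = 10  # max requests a server handles per tick (assumed value)

class Server:
    def __init__(self, name):
        self.name = name
        self.queue = []
        self.neighbours = []

    def accept(self, request):
        self.queue.append(request)

    def overloaded(self):
        return len(self.queue) > CAPACITY

    def shed_load(self):
        # distress signal: push excess work to the least-loaded neighbour
        while self.overloaded() and self.neighbours:
            helper = min(self.neighbours, key=lambda s: len(s.queue))
            if len(helper.queue) >= CAPACITY:
                break  # nobody can help; keep the work locally
            helper.accept(self.queue.pop())
```

A real system would also need the re-balancing step mentioned above once the overload passes.&lt;br /&gt;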
&lt;br /&gt;
&lt;br /&gt;
* (AS) Stopping DDoS against specific machines&lt;br /&gt;
** (AS) I don&#039;t think that this should be specifically addressed. I think measures introduced to guard against this will ultimately negatively impact the overall system in terms of performance.&lt;br /&gt;
*** I don&#039;t like the idea of sacrificing the one for the many though.&lt;br /&gt;
**** (AS) The main thing with what I&#039;ve proposed is that the motivation behind doing a DDoS attack is completely gone (by doing one a service would either maintain or increase its overall availability). I think eliminating the main result of a DDoS attack would mean that there would be no reason to guard against DDoS attacks on a specific machine.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
** Many of the DDoS attacks utilize the property of anonymity. These services serve anyone who requests their service. Many DDoS attacks then generate sufficient traffic that the computer behind the service can no longer cope. If we remove anonymity and only serve &#039;known&#039; parties, the spurious requests would be ignored. So we need to &#039;know&#039; who our friends are.&lt;br /&gt;
*** This of course requires a form of unspoofable authentication unlike IP. &lt;br /&gt;
**** Serving only &#039;known&#039; parties reduces the distribution of information, or at least its rate. I was thinking of removing anonymity on a lower level, so that any party that&#039;s not anonymous while sending a packet to your machine is considered &#039;known&#039;, and anything unknown (unsigned, unrepresented in some way) is blocked. So, we don&#039;t really need to &#039;know&#039; who our friends are, we just need to know who our friends aren&#039;t. &lt;br /&gt;
&lt;br /&gt;
**** Another thing I had in mind is punishment in case a &#039;known&#039; party participates in a DDoS attack: not punishing the owner of that machine (who is probably a victim as well), but the software or hardware in some sense. &lt;br /&gt;
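A minimal sketch of the block-anything-unknown filter described above. The shared-secret HMAC here is a stand-in for whatever unspoofable signature scheme the network would actually use; the key name and distribution are assumptions.&lt;br /&gt;

```python
# Hypothetical allowlist filter: packets from senders without a valid
# signature are dropped before they reach the service. The signature
# scheme (a shared-secret HMAC) is a stand-in, not a real protocol.
import hashlib
import hmac

SECRET = b"network-wide-secret"  # assumed key distribution

def sign(payload):
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def accept_packet(payload, signature):
    # unknown (unsigned or mis-signed) senders are simply blocked
    expected = sign(payload)
    return hmac.compare_digest(expected, signature)
```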
&lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
**How about developing such a network topology and protocols that make DDoS attacks less efficient or harder to perform? Some sort of CAPTCHA, but for machines and protocols, to distinguish them from bots, maybe? &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
** I&#039;m not sure what is meant by stopping; I don&#039;t think we can stop DDoS given the way things are currently run, we can only block it. From my knowledge, most software that stops DDoS does so by blocking, or even complete shutdown like McColo.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*Stopping DDoS&lt;br /&gt;
**One method is to apply the same approach used to eliminate DoS, rejecting subsequent requests that exceed a specific rate, but applied across the irrelevant (distributed) sources.&lt;br /&gt;
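The rate-based rejection above can be sketched as a per-source token bucket. The RATE and BURST numbers are illustrative assumptions, not tuned values.&lt;br /&gt;

```python
# Rough token-bucket sketch: each source gets a budget of tokens, and
# requests beyond that budget are rejected until the bucket refills.
import collections

RATE = 5    # tokens added per tick (assumed)
BURST = 10  # bucket capacity (assumed)

buckets = collections.defaultdict(lambda: BURST)

def allow(source):
    if buckets[source] > 0:
        buckets[source] -= 1
        return True
    return False  # over the rate: reject this request

def tick():
    # refill every bucket once per time unit
    for source in list(buckets):
        buckets[source] = min(BURST, buckets[source] + RATE)
```

Against a distributed attack this only helps per source, which is why the identity ideas below matter.&lt;br /&gt;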
&lt;br /&gt;
*One way we could stop DDoS would be to have each connection to the internet assigned to a particular identity. This identity would be used to verify who is attempting connections. The reason DDoS works is that currently, IP addresses can be spoofed. The only way to verify an identity is to request a response, but by then the damage is done. With a verified identity, connection attempts can be verified while being routed, so that the request may not necessarily even reach the destination host.&lt;br /&gt;
&lt;br /&gt;
Basically, we need some encryption system using keys so that as the packets are being routed, the identity of the packet&#039;s sender can be verified. Ideally the decryption would be trivial so as to prevent noticeable latency. Because an identity is verified, if there is spoofing of packets, they would be dropped during the routing. If all the identities are verified and are still attempting a DDoS attack, the attacker&#039;s identity will be traced back to the attacker.&lt;br /&gt;
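A toy sketch of the verify-in-transit idea above. The identity registry, key names, and HMAC tagging are all assumptions standing in for a real key infrastructure.&lt;br /&gt;

```python
# Hypothetical router check: each registered identity has a key, spoofed
# packets are dropped mid-route, and verified abusers remain traceable.
import hashlib
import hmac

KEYS = {"alice": b"k1", "mallory": b"k2"}  # assumed identity registry

def sign_packet(identity, payload):
    tag = hmac.new(KEYS[identity], payload, hashlib.sha256).hexdigest()
    return {"from": identity, "payload": payload, "tag": tag}

def route(packet):
    key = KEYS.get(packet["from"])
    if key is None:
        return "drop"  # unknown identity
    tag = hmac.new(key, packet["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, packet["tag"]):
        return "drop"  # spoofed: wrong key for the claimed identity
    return "forward"   # verified; traceable back to the sender if abusive
```

As noted, the decryption/verification step must be cheap enough to avoid noticeable routing latency.&lt;br /&gt;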
&lt;br /&gt;
(I think we&#039;re not looking low enough. We&#039;re trying to find a solution for this problem assuming the system that made the problem possible is still unchanged. We enforce more security by identification, encryption, etc., but the system is still problem-prone. This will allow us to identify an attacker, but only after the attack has started (or even finished). It&#039;s like trying to eliminate theft from a society of poor, unemployed, uneducated people by enforcing more security and punishment, which will reduce the rate and motivation but can&#039;t stop the possible attack. It&#039;s a pretty crude analogy, but rather than policing that society, I want to make them rich, employed and educated, so that theft is just not an efficient way for them to get goods. So, rather than protecting machines from attacks, I want to build a system where DDoS attacks are simply inappropriate.)&lt;br /&gt;
&lt;br /&gt;
===2: Stopping phishing===&lt;br /&gt;
Group members: Waheed Ahmed, Nicolas Lessard, Raghad Al-Awwad, Tarjit Komal&lt;br /&gt;
&lt;br /&gt;
* A way of automatically checking the signature of a message to make sure it really is from a trusted source.&lt;br /&gt;
** ie: &amp;quot;Nation of Banks, did your member TD send me a message to reset my password?&amp;quot; &lt;br /&gt;
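A toy sketch of that automated check. The Nation of Banks directory, its methods, and the hash-based lookup are all hypothetical names invented for illustration.&lt;br /&gt;

```python
# Hypothetical trusted directory: before trusting a message that claims
# to be from TD, ask the directory whether that member actually sent it.
import hashlib

class BankDirectory:
    def __init__(self):
        self.sent = set()  # (member, message-hash) pairs reported by members

    def member_sent(self, member, message):
        digest = hashlib.sha256(message).hexdigest()
        self.sent.add((member, digest))

    def did_send(self, member, message):
        digest = hashlib.sha256(message).hexdigest()
        return (member, digest) in self.sent
```

A phishing mail that merely imitates TD would fail the lookup, since TD never reported it to the directory.&lt;br /&gt;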
&lt;br /&gt;
*There should be filters to verify where the message is coming from. If the message is coming from an unknown source, it should be blocked. &lt;br /&gt;
*Don&#039;t use the links in an email to get to any web page, if you suspect the message might not be authentic.&lt;br /&gt;
*Avoid filling out forms in email messages that ask for personal financial information. Phishers can make exact copies of the forms found on financial institution websites.&lt;br /&gt;
*Make it so a machine needs to be authorized to use your information -- a machine that you don&#039;t own can&#039;t use your information to do anything, regardless of whether it has it or not.&lt;br /&gt;
*Ensure that any website that requires the filling of personal information be a secure website which can be traced to the original organisation.&lt;br /&gt;
*Ensure that whatever browser you are using is up to date with the most recent security patches applied.&lt;br /&gt;
*Obviously, report any suspected phishing to the appropriate authorities so that proper action can be taken.&lt;br /&gt;
&lt;br /&gt;
===3: Limiting the spread of malware===&lt;br /&gt;
Group members: keith, Andrew Luczak, David Barrera, Trevor Gelowsky, Scott Lyons&lt;br /&gt;
*(KM) Heterogeneous systems - it is much easier to write code to attack a single type of system&lt;br /&gt;
*(KM) Individualized security policies&lt;br /&gt;
**(AL) A baseline security level would help prevent malware spreading to/from a system with &amp;quot;individual non-security&amp;quot;&lt;br /&gt;
*(KM) Identify all programs through digital signatures&lt;br /&gt;
*(KM) Peer rating system for programs, customize security policies based on peer ratings&lt;br /&gt;
**(SL) Need some way to keep rating system from being &amp;quot;gamed&amp;quot;&lt;br /&gt;
***(AL) Maybe a program gets flagged if it experiences a rapid approval increase?&lt;br /&gt;
**(AL) Need to protect against benign programs with good ratings being updated into malware&lt;br /&gt;
*(KM) System level forensics on program execution and resource/file modification&lt;br /&gt;
*(KM) Customizable user and program blacklists&lt;br /&gt;
*(SL) Sandboxing with breach management - know what files have been modified by a process&lt;br /&gt;
*(SL) Trending - what does the application spend most of its time doing?&lt;br /&gt;
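The peer-rating and anti-gaming points above can be sketched together: ratings feed a score, and a program whose approvals spike too fast gets flagged for review (AL&#039;s suggestion). The WINDOW and SPIKE thresholds are assumed, not derived.&lt;br /&gt;

```python
# Hypothetical peer-rating record with a simple anti-gaming check.

WINDOW = 10  # look at the last N ratings (assumed)
SPIKE = 0.9  # fraction of approvals that counts as a spike (assumed)

class ProgramRating:
    def __init__(self):
        self.votes = []  # True = approve, False = disapprove
        self.flagged = False

    def rate(self, approve):
        self.votes.append(approve)
        recent = self.votes[-WINDOW:]
        # flag suspiciously rapid approval increases for manual review
        if len(recent) == WINDOW and recent.count(True) >= SPIKE * WINDOW:
            self.flagged = True

    def score(self):
        if not self.votes:
            return 0.0
        return self.votes.count(True) / len(self.votes)
```

Security policies could then be customized from score(), while flagged programs are held back, addressing SL&#039;s concern about the system being gamed.&lt;br /&gt;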
&lt;br /&gt;
===4: Bandwidth hogs===&lt;br /&gt;
Group members: Mike Preston, Fahim Rahman, Michael Du Plessis, Matthew Chou, Ahmad Yafawi&lt;br /&gt;
&lt;br /&gt;
*limit bandwidth for each user&lt;br /&gt;
*if user has significant bandwidth demands for a certain period of time&lt;br /&gt;
**add them to a watch list&lt;br /&gt;
**monitor their behaviour&lt;br /&gt;
**divert communication to other hosts that can satisfy requests.&lt;br /&gt;
***if there are no other hosts that can satisfy the request, then distribute data to other idle and capable hosts. Load is now reduced on the one link.&lt;br /&gt;
*QoS&lt;br /&gt;
*bandwidth management/scheduling (similar to OS scheduling)&lt;br /&gt;
**utilizing a round robin schedule to allow for periodic increases in bandwidth per user&lt;br /&gt;
**priority system that allows for more critical operations being done by a user to take precedence over others&lt;br /&gt;
*have the bandwidth separated evenly across all users and allow users to donate their bandwidth for others to use, but be able to revoke it at any time&lt;br /&gt;
* Tiered Bandwidth Distribution&lt;br /&gt;
** The main idea is that your machine gets more bandwidth the more you give back to the community. It&#039;s similar to some trackers and darknet programs which won&#039;t increase your download speed unless you contribute X bytes back to your peers.&lt;br /&gt;
** Tier 1, Basic privileges i.e. all machines have minimal bandwidth.&lt;br /&gt;
** Tier n, we define some requirements to be met then we increase bandwidth accordingly.&lt;br /&gt;
*** Drop a tier if a machine doesn&#039;t maintain the specified requirements of that specific tier.&lt;br /&gt;
*** Advantage: monitoring bandwidth on the network is cheap, while implementing what is stated above is not.&lt;br /&gt;
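The tier scheme above, as a minimal sketch. The tier table (byte thresholds and Mbps values) is entirely made up for illustration.&lt;br /&gt;

```python
# Hypothetical tiered bandwidth: a machine earns a higher tier by
# contributing bytes back, and drops a tier when it stops qualifying.

TIERS = [
    (0, 1),      # Tier 1: contribute 0 bytes, get 1 Mbps (minimal baseline)
    (100, 5),    # Tier 2: contribute 100 bytes, get 5 Mbps
    (1000, 20),  # Tier 3: contribute 1000 bytes, get 20 Mbps
]

def bandwidth_for(contributed):
    allowed = TIERS[0][1]
    for required, mbps in TIERS:
        if contributed >= required:
            allowed = mbps  # highest tier whose requirement is met
    return allowed
```

Re-evaluating bandwidth_for() periodically gives the drop-a-tier behaviour: a machine that stops contributing falls back to the tier it still qualifies for.&lt;br /&gt;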
*As a metaphor to our &amp;quot;real world society,&amp;quot; bandwidth control can be treated as we do speed for cars.&lt;br /&gt;
**Certain areas need more free-flowing traffic, so speed limits are increased. Others require a slower pace, which is enforced. These &amp;quot;areas&amp;quot; can be translated to users or programs in our distributed OS model.&lt;br /&gt;
**Throttling provides one possible implementation of these constraints&lt;/div&gt;</summary>
		<author><name>Tkomal</name></author>
	</entry>
</feed>