<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dbarrera</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dbarrera"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Dbarrera"/>
	<updated>2026-05-01T17:13:48Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9458</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9458"/>
		<updated>2011-04-12T00:50:00Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Case 3: Phishing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
[[https://docs.google.com/present/edit?id=0AQJ2IGOeo68XZGhuNnJ0YjRfM2doZDg3Ymc5&amp;amp;hl=en&amp;amp;authkey=CK7Mk4YO Presentation]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against three malicious acts: comment spam, denial of service, and phishing attacks.&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&lt;u&gt;Theory of Justice&lt;/u&gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society.[1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;.[3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment benefits the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment with retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes how the crime has a negative effect on society.[1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be a teleologic retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&lt;u&gt;Structure of Punishment&lt;/u&gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can impose punishments on other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, under sovereign rule one overall leader of justice determines what is right and wrong in order to best serve the needs of the system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The fine imposed upon the criminal may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused society.[1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of these professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be required to pay a fine as well as serve a prison sentence; however, they serve different purposes. All three punishment types act as deterrents to future criminals, but each has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&lt;u&gt;Additional Concepts Related to Justice&lt;/u&gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks) then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already had a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral record is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, being part of a crime negligently. An example of such a distinction would be whether a driver hit someone intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act are punished with the same severity, to avoid the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime involving the Internet and computers. One preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), enacted by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, many changes have been made to it because of ambiguity in how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be made more specific over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on computer networks by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but by then it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true, except in cases where an error has occurred in the system: suppose a bit goes missing and sensitive information is sent to an incorrect address. If this error caused great losses to some entity, would the user be blamed, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone; the difference between murder and manslaughter is intent.[12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if a computer ran a genetic-programming system to create a program it deemed good, and then intentionally used it, that intent to use would differ from merely creating new programs until one is deemed suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer could be decided, the next thing to consider is how to prevent a computer from taking malicious actions. Following general deterrence theory, we would try to instill some fear of the consequences or shame that would come from taking malicious actions. This approach is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable from the computer's perspective. Deterrence only works if possible criminals are afraid of the consequences and cannot accept the ratio of profit over penalty that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing idle. The human who forces a computer to take malicious actions, however, may be deterred by the legal consequences that might follow, or by the performance drop on his own computer. The penalties currently in place only affect a human; whether it is a jail sentence or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent the human user from continuing those actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating would allow nodes to decide whether to communicate with other nodes based on how low or high the rating is. Lowering the rating for malicious actions and raising it for being helpful to the system would allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel &amp;quot;shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on these simulated feelings of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
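&lt;br /&gt;
As a rough illustration of this rating mechanism, the following sketch shows how malicious and helpful actions might move a node's rating across an expulsion threshold. All deltas and the expulsion floor are invented for illustration; nothing here is specified by the system.&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch only: the deltas and the expulsion floor below are
# invented numbers, not values specified by the proposed system.

EXPULSION_FLOOR = -100  # hypothetical lowest level, equal to expulsion

def adjust(mr, delta):
    """Lower the rating for malicious actions, raise it for helpful ones."""
    return mr + delta

def status(mr):
    """A node remains a member only while its rating is above the floor."""
    if mr > EXPULSION_FLOOR:
        return "member"
    return "expelled"

mr = 0
mr = adjust(mr, -60)   # caught sending spam (hypothetical penalty)
mr = adjust(mr, +10)   # relayed traffic helpfully (hypothetical reward)
assert status(mr) == "member"
mr = adjust(mr, -60)   # a second offense crosses the floor
assert status(mr) == "expelled"
```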
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is nonetheless important to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes the above discussion of justice and applies it to the management of a computer network. The implementation is designed to be incrementally deployable, so that it would be realistic for a network to use the proposed system. The implementation is entitled the “Justice Web”. &lt;br /&gt;
&lt;br /&gt;
The purpose of the Justice Web is to protect public-facing services from attacks coming from outside the network. This is accomplished by keeping a record of the criminal acts committed by connecting computers, and allowing the services access to these records. Criminal acts in this case are actions by a connecting computer that are considered harmful to the network. The record kept by the network is a “Morality Rating”, an integer meant to reflect the severity of the crimes committed.&lt;br /&gt;
&lt;br /&gt;
===Assumptions===&lt;br /&gt;
&lt;br /&gt;
Certain assumptions must be made regarding the other class projects in order for this implementation to be deployable. Most importantly, it is assumed that there is some way in which the network can uniquely identify a computer that connects to the network. This allows the Justice Web to keep a criminal log of clients, and recognize if an offender is attempting to connect.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
&lt;br /&gt;
Morality Rating (MR) is an integer assigned to computers that have connected to a service within the Justice Web. The purpose of the MR is to keep track of a computer’s past offenses, and allow services to restrict access using thresholds. For instance, a service within the Justice Web could restrict access to those above -100 MR.&lt;br /&gt;
&lt;br /&gt;
While the primary purpose of the Justice Web is to protect against attackers from outside the network, every node in the Justice Web is assigned an MR, which increases and decreases based on its actions within the network. Ideally, those with higher MR are allowed access to more shared resources, though this would be implementation specific.&lt;br /&gt;
&lt;br /&gt;
The MR assigned to a computer is local to the Justice Web that assigned the rating. For example, if two separate networks deploy a Justice Web, the ratings they assign do not affect the other network’s ratings.&lt;br /&gt;
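&lt;br /&gt;
A minimal sketch of the threshold check described above; the class and method names are illustrative assumptions, not part of any specified Justice Web API.&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch of a service enforcing an MR threshold; the class and
# method names are assumptions, not part of the Justice Web specification.

class Service:
    def __init__(self, name, mr_threshold):
        self.name = name
        self.mr_threshold = mr_threshold  # e.g. -100, as in the example above

    def allows(self, client_mr):
        """Permit a connection only when the client's MR is above the threshold."""
        return client_mr > self.mr_threshold

service = Service("wiki", mr_threshold=-100)
assert service.allows(0)         # a first-time client at 0 MR is let in
assert not service.allows(-150)  # a repeat offender is refused
```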
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
In order to assign MR to offenders, an authority figure is needed to declare if a crime has been committed. In the Justice Web, this role is taken by the Judges, who may be one or more computers within the network. It is the Judges’ responsibility to create the rules of the network, gather the evidence when a claim is made, declare if a crime has been committed, and assign a new MR based on the ruling.&lt;br /&gt;
&lt;br /&gt;
How a Judge is picked isn’t set in stone, but in general it would be the node(s) in the network with the highest MR. Alternatively, the Judges could be picked through some democratic process.&lt;br /&gt;
&lt;br /&gt;
The judgments made are mostly automated, based on the rules of the network. However, it can be specified that certain crimes, such as a claim of a phishing scam being committed, be dealt with by a human.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The Justice Web is a virtual network, in that the nodes are not necessarily connected or even anywhere near each other. Because of this, it would be inconvenient and potentially harmful to have services look up a computer’s MR on every connection attempt. To prevent this, MR will be stored in a central location, but propagated throughout the network.&lt;br /&gt;
&lt;br /&gt;
This is done using a master-slave approach to database replication. The Judges of the network store the “Master List” and propagate the data to the “Slave Lists” stored by the services within the network. The records stored by a Slave List are determined by the thresholds that the specific service has put in place. As mentioned in the Morality Rating subsection, a service can set thresholds to determine if a computer should be allowed access. In the example, an MR of -100 would be blocked from the service. If a service had only this threshold in place, it would only need to be aware of computers at -100 MR or below, and so would only store that data in its Slave List. Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
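&lt;br /&gt;
The threshold-driven replication can be sketched as follows; the hostnames and ratings are invented, and the real propagation protocol is not specified here.&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch of threshold-based replication: the Judges push to each
# service only the records that service needs to enforce its own threshold.
# Hostnames and ratings are invented for illustration.

master_list = {"host-a": -150, "host-b": -40, "host-c": 10}

def slave_list_for(master, threshold):
    """A service with threshold -100 only needs records at -100 MR or below."""
    return {host: mr for host, mr in master.items() if threshold >= mr}

blocked = slave_list_for(master_list, -100)
assert blocked == {"host-a": -150}  # the only record the service must block
```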
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Judges define and use rules to determine whether a crime has been committed. A rule consists of three parts: the offense, the proof needed, and the severity of the punishment. The offense is a name assigned to the crime, which services can claim has been committed. The proof is the information the Judges require to make a conviction. The severity of the punishment is an integer value to subtract from the offender’s current MR.&lt;br /&gt;
&lt;br /&gt;
Each network deploying a Justice Web specifies their own set of rules. These rules are made available to the public so that services within the network are aware of the crimes they can report. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
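&lt;br /&gt;
One possible concrete representation of the three-part rules; the field layout and the severity values are assumptions for illustration, since the text specifies the parts of a rule but not their encoding.&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch of the three-part rule; the article specifies the
# fields (offense, proof, severity) but not a concrete representation,
# and the severity values below are invented.
from dataclasses import dataclass

@dataclass
class Rule:
    offense: str   # name services use when filing a claim
    proof: str     # evidence the Judges require for a conviction
    severity: int  # amount subtracted from the offender's MR

rules = [
    Rule("comment-spam", "application logs showing automated postings", 10),
    Rule("denial-of-service", "packet captures of the traffic flood", 50),
]

def apply_conviction(mr, rule):
    """On conviction, the Judges subtract the rule's severity from the MR."""
    return mr - rule.severity

assert apply_conviction(0, rules[0]) == -10
```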
&lt;br /&gt;
===Evidence===&lt;br /&gt;
&lt;br /&gt;
Evidence is used by the Justice Web to determine if a crime has been committed. Evidence is stored in encrypted logs located on a service’s computer, and submitted to the judges when a claim is made.&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim or any other system in the chain of custody does not tamper with evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
The type of evidence required varies and is defined by the Judges of a network. For a DDoS attack, the Justice Web would potentially be able to examine the evidence logs and determine, through the analysis of statistical evidence, which computers were actively involved in the attack and which traffic was legitimate [17].&lt;br /&gt;
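&lt;br /&gt;
A sketch of tamper-evident evidence logging using an HMAC shared with the Judges; the key handling is simplified and the algorithm is an assumption, since the text requires signing or encryption without fixing a mechanism.&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative sketch of tamper-evident evidence logs; the key material and
# algorithm (HMAC-SHA256) are assumptions, since the article requires signing
# or encryption without fixing a mechanism.
import hashlib
import hmac

JUDGE_KEY = b"shared-secret-with-judges"  # hypothetical key material

def seal(log_bytes):
    """Attach an authentication tag so later tampering is detectable."""
    tag = hmac.new(JUDGE_KEY, log_bytes, hashlib.sha256).hexdigest()
    return log_bytes, tag

def verify(log_bytes, tag):
    """Judges recompute the tag and compare before trusting the evidence."""
    expected = hmac.new(JUDGE_KEY, log_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

log, tag = seal(b"2011-04-11 connection flood from host-a")
assert verify(log, tag)
assert not verify(b"tampered entry", tag)
```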
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
Membership in a Justice Web would primarily consist of public-facing services seeking protection from attacks. However, because resources can be shared based on a node’s MR, there is also reason for computers to join the network simply for access to these resources.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily the same as those of other self-governing entities. In the real world, cross-jurisdiction legal systems are known to exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests in the council.  &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable implementation such as the Justice Web, where hosts opt in, is possible. This section briefly discusses a different approach to the Justice Web that attempts to deal with some of the restrictions mentioned above, at the expense of losing incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Morality Rating ===&lt;br /&gt;
&lt;br /&gt;
The global implementation still requires the existence of a morality rating, but in a global setting we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a &amp;quot;master list&amp;quot; or &amp;quot;slave lists&amp;quot; of morality ratings. The obvious requirement for a built-in morality rating is that the host itself should not be able to arbitrarily modify the value. One possible mechanism is a Trusted Platform Module ([http://www.trustedcomputinggroup.org/developers/ TPM]), which allows encryption and decryption of data but does not allow the extraction of the private encryption key. Indeed, storing the morality rating within hosts rather than on external lists alleviates the need for distributed storage and allows better scalability, but also requires all hosts to be compliant with the mechanism. &lt;br /&gt;
&lt;br /&gt;
=== Connection management ===&lt;br /&gt;
&lt;br /&gt;
Due to the modified morality rating storage, there is no longer a need to look up the morality rating of a host upon incoming connections. We instead need a way to transmit the morality rating on each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean modifying underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web. &lt;br /&gt;
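&lt;br /&gt;
One way to picture the new field is a small fixed-size header carried on every connection request. The layout and threshold below are a hypothetical sketch, not a concrete protocol proposal.&lt;br /&gt;

```python
import struct

# Hypothetical wire format: network byte order, 4-byte host identifier
# followed by a 1-byte morality rating (0-255).
HEADER_FORMAT = "!IB"

def make_header(host_id, rating):
    """Client side: prepend this to each outgoing connection request."""
    return struct.pack(HEADER_FORMAT, host_id, rating)

def accept_connection(header, threshold=50):
    """Server side: admit only hosts at or above the morality threshold."""
    host_id, rating = struct.unpack(HEADER_FORMAT, header)
    return rating >= threshold

assert accept_connection(make_header(42, 80))
assert not accept_connection(make_header(7, 10))
```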
&lt;br /&gt;
=== Rules and Judges ===&lt;br /&gt;
&lt;br /&gt;
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research. &lt;br /&gt;
&lt;br /&gt;
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but it would require a full reboot of the Internet as well as new hardware, making it a realistically unlikely solution.&lt;br /&gt;
&lt;br /&gt;
==Use Cases==&lt;br /&gt;
&lt;br /&gt;
This section reviews three common attacks and describes how the computer-based justice system would deal with them. &lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam===&lt;br /&gt;
The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evidence collected.&#039;&#039;&#039; The comment being reported as spam, as well as the website hosting the comment (forum, blog, etc.). The ID of the commenter is also collected, assuming we have a unique identifier for each commenting host. &lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&#039;&#039;&#039;Local implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Users report comment spam&lt;br /&gt;
*The morality of the offending host is adjusted if the evidence is found to incriminate the host. &lt;br /&gt;
*Based on the new morality rating, the offending host may not be allowed to post to the site depending on the restrictions of the hosting server&lt;br /&gt;
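&lt;br /&gt;
The three steps above can be sketched as follows; the penalty size and posting threshold are illustrative assumptions.&lt;br /&gt;

```python
# Hypothetical sketch of the local implementation: each upheld spam report
# lowers the commenter's morality rating, and the hosting server refuses
# posts from hosts below a threshold of its own choosing.
ratings = {"host-a": 100}

def report_spam(host_id, evidence_upheld, penalty=10):
    """Adjust the offender's morality only if the evidence incriminates it."""
    if evidence_upheld:
        ratings[host_id] = max(0, ratings[host_id] - penalty)

def may_post(host_id, site_threshold=70):
    """Hosting-server policy: unknown hosts default to a full rating."""
    return ratings.get(host_id, 100) >= site_threshold

for _ in range(4):
    report_spam("host-a", True)
assert not may_post("host-a")      # rating is now 60, under the threshold
assert may_post("host-a", 50)      # a more permissive site still accepts it
```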
&lt;br /&gt;
&#039;&#039;&#039;Global implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Same method for reporting the comment spam and adjusting the morality rating as in the local implementation above. &lt;br /&gt;
*If a host has a sufficiently low morality rating, the host site will disable the ability for the offending host to communicate with the site at all.&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service===&lt;br /&gt;
Denial of service is an attack in which a service that is normally available is accessed by a large number of hosts, or by a small number of hosts at high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evidence collected.&#039;&#039;&#039; ID of any host connecting to the victim server for the duration of the attack.&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&#039;&#039;&#039;Local implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*The morality of each user is looked up to see if the request should be served. This may cause even greater load on the host.&lt;br /&gt;
*Once it has been established that participants of the attack have an unacceptable morality rating, they are blocked from communication with the site.&lt;br /&gt;
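&lt;br /&gt;
A minimal sketch of this filtering step, with an illustrative cutoff value (note that, as mentioned, the lookups themselves add load on the victim):&lt;br /&gt;

```python
# Hypothetical sketch: during an attack, the victim looks up the morality
# rating of each connecting host and drops requests from hosts under a cutoff.
morality = {"host-a": 90, "host-b": 15, "host-c": 20}

def filter_requests(connecting_hosts, cutoff=30):
    """Partition incoming hosts into served and blocked lists."""
    served, blocked = [], []
    for host in connecting_hosts:
        if morality.get(host, 0) >= cutoff:
            served.append(host)
        else:
            blocked.append(host)
    return served, blocked

served, blocked = filter_requests(["host-a", "host-b", "host-c"])
assert served == ["host-a"]
assert blocked == ["host-b", "host-c"]
```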
&lt;br /&gt;
&#039;&#039;&#039;Global implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Since morality rating is passed in with communication, requests could be filtered out (i.e. at a firewall level).&lt;br /&gt;
*Any incoming communication with a bad enough morality would simply be ignored.&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evidence collected.&#039;&#039;&#039; The fraudulent site URL and the legitimate site URL&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Local implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Users report phishing site.&lt;br /&gt;
*Based on the morality of the host of a phishing site, it may be removed from the network.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Global implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Same method of reporting and morality adjustment as the local implementation above.&lt;br /&gt;
*Removal from the network is not really possible, but the client can read the server&#039;s morality rating upon connecting.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Applying justice to a distributed system requires an understanding of how society administers punishment, whether through a teleologic or a retributive method, as well as of the range of intent between purposely and negligently participating in a deviant act. Discussions of punishment and intent brought up another social construct that exists in society: morality. Looking at a single computer, it is hard to say that the computer &amp;quot;intended&amp;quot; to do something, or that it would feel bad if we made it perform a series of repetitive operations as a form of punishment. Even though implementing emotions and a care for self-preservation in a computer is difficult, we can at least apply a morality value to each computer node, so that it may be judged by any individual that plans on communicating or interacting with that node. By discussing specific cases in which a justice system would take part in a distributed system, we can conceptualize a basis upon which a future implementation of justice on computers might be possible. Given the advantages and disadvantages of implementing such a system on a local and global scale, it is evident that a more in-depth look is required into the technical aspects, and into the assumptions that must be upheld by other factors on the Internet (attribution, reputation, contracts), in order to fight injustice and to turn fear against those who prey on the fearful, as malicious users do to users who have no protection. This is what the Justice Web is for.&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &#039;&#039;A Theory of Justice: Revised Edition&#039;&#039;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &#039;&#039;Discipline &amp;amp; Punish: The Birth of the Prison&#039;&#039;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &#039;&#039;Ecce Homo &amp;amp; The Antichrist&#039;&#039;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;br /&gt;
&lt;br /&gt;
[17] S. Yu, W. Zhou, R. Doss, &#039;&#039;Information theory based detection against network behavior mimicking DDoS attacks&#039;&#039;, IEEE, April 2008, [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1]. Last visited April 2011.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9457</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9457"/>
		<updated>2011-04-12T00:47:51Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Case 2: Denial of Service */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
[[https://docs.google.com/present/edit?id=0AQJ2IGOeo68XZGhuNnJ0YjRfM2doZDg3Ymc5&amp;amp;hl=en&amp;amp;authkey=CK7Mk4YO Presentation]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against three malicious acts: comment spam, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls provides a definition of the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society; second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
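&lt;br /&gt;
The bandwidth example can be sketched as a simple throttling rule; the allowance and the numbers used here are illustrative assumptions.&lt;br /&gt;

```python
# Hypothetical teleologic punishment: a bandwidth hog is slowed down rather
# than expelled, so the system keeps the benefit of its continued presence.
ALLOWANCE = 100  # transfer units permitted per accounting interval

def throttled_rate(requested, used_this_interval):
    """Grant at most the remaining allowance for this interval."""
    remaining = max(0, ALLOWANCE - used_this_interval)
    return min(requested, remaining)

assert throttled_rate(30, 50) == 30    # within the allowance: full rate
assert throttled_rate(80, 90) == 10    # hog: throttled to what remains
assert throttled_rate(80, 150) == 0    # over budget: paused this interval
```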
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to discuss retributive punishment in comparison to retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus he should internalize how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
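&lt;br /&gt;
A minimal sketch of such a selective penalty; the host and message-type names are illustrative.&lt;br /&gt;

```python
# Hypothetical teleologic-retributive penalty: only the abused message type
# is blocked, leaving the host's useful routing traffic intact.
blocked = {"spammy-host": {"COMMENT"}}

def permit(host, message_type):
    """Allow a message unless this type is blocked for the sending host."""
    return message_type not in blocked.get(host, set())

assert not permit("spammy-host", "COMMENT")  # the criminal act is stopped
assert permit("spammy-host", "ROUTE")        # the efficient path stays usable
assert permit("honest-host", "COMMENT")      # unpunished hosts are unaffected
```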
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can impose punishments on other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are any members of society that are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be ordered to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, through DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers, so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
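&lt;br /&gt;
As a sketch, a moral score could be computed from such measurable qualities, with a per-user relationship parameter acting as a tolerance threshold. All metric names and weights here are illustrative assumptions.&lt;br /&gt;

```python
# Hypothetical moral score derived from measurable qualities; the metric
# names and weights are illustrative, not part of the proposal.
def moral_score(metrics):
    good = (0.5 * metrics["bandwidth"]
            + 0.3 * metrics["uptime"]
            + 0.2 * metrics["data_integrity"])
    bad = 10 * metrics["spam_reports"] + 20 * metrics["dos_reports"]
    return good - bad

def willing_to_interact(peer_metrics, tolerance=40):
    """Per-user relationship parameter: a lower tolerance permits
    interactions with less moral peers."""
    return moral_score(peer_metrics) >= tolerance

upstanding = {"bandwidth": 80, "uptime": 90, "data_integrity": 100,
              "spam_reports": 0, "dos_reports": 0}
deviant = dict(upstanding, spam_reports=5, dos_reports=1)

assert willing_to_interact(upstanding)
assert not willing_to_interact(deviant)             # its score has collapsed
assert willing_to_interact(deviant, tolerance=10)   # a laxer user still talks
```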
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime purposely, down to the lowest, participating in a crime negligently. An example of such a distinction is deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act are punished with the same severity, to prevent the need for human intervention and investigation.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime involving the Internet and computers. An example of a preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because of unspecific instances of how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures then current on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except in cases where an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created losses for some entity, would the user be blamed, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer were running a genetic-programming process to create a program it deemed good, and then intentionally used it, that intent to use would differ from the computer continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to prevent a computer from performing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences, or shame, around malicious actions. This approach to justice is difficult because computers do not have feelings: to a computer, any kind of work, from word processing to a denial of service attack, is equally agreeable. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit over penalty that they would procure from a malicious act. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing by idle. However, the human who forces their computer to perform such malicious actions may be deterred by the consequences that might follow from the law, or by the performance drop on their own computer. The penalties currently in place only affect a human, whether through a jail sentence or confiscation of the physical computer; the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on the network by the human user using another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would ideally grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; instead, a new system must be implemented so that computers on the network are deterred from malicious actions. A morality system that assigns every node its own morality rating allows nodes to decide whether to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for helpful behaviour allows computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel a form of shame when their morality is so low that they can barely communicate with others (the lowest level might amount to expulsion). This simulated sense of care and shame could allow a justice system to be implemented on computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is nonetheless important to describe the features a fully functional system would require and to outline the potential benefits and shortcomings of the system. In fact, we were unable to devise one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, both take a teleologic-retributive approach to justice: punishments are viewed as necessary, but they are only imposed when doing so would not negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes the above discussion of justice and applies it to the management of a computer network. The implementation is designed to be incrementally deployable, so that it would be realistic for a network to adopt the proposed system. The implementation is named the “Justice Web”. &lt;br /&gt;
&lt;br /&gt;
The purpose of the Justice Web is to protect public-facing services from attacks coming from outside the network. This is accomplished by keeping a record of the criminal acts committed by connecting hosts, and allowing the services access to these records. Criminal acts in this case are actions by a connecting host that are considered harmful to the network. The record kept by the network is a “Morality Rating”: an integer meant to reflect the severity of the crimes committed.&lt;br /&gt;
&lt;br /&gt;
===Assumptions===&lt;br /&gt;
&lt;br /&gt;
Certain assumptions must be made regarding the other class projects in order for this implementation to be deployable. Most importantly, it is assumed that there is some way in which the network can uniquely identify a computer that connects to the network. This allows the Justice Web to keep a criminal log of clients, and recognize if an offender is attempting to connect.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
&lt;br /&gt;
Morality Rating (MR) is an integer assigned to computers that have connected to a service within the Justice Web. The purpose of the MR is to keep track of a computer’s past offenses, and allow services to restrict access using thresholds. For instance, a service within the Justice Web could restrict access to those above -100 MR.&lt;br /&gt;
&lt;br /&gt;
While the primary purpose of the Justice Web is to protect against attackers from outside the network, every node in the Justice Web is assigned an MR, which increases and decreases based on their actions within the network. Ideally, those with higher MR are allowed access to more shared resources, though this would be implementation specific.&lt;br /&gt;
&lt;br /&gt;
The MR assigned to a computer is local to the Justice Web that assigned the rating. For example, if two separate networks deploy a Justice Web, the ratings they assign do not affect the other network’s ratings.&lt;br /&gt;
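The threshold mechanism above can be sketched in a few lines. This is only an illustrative sketch: the class and method names, the default threshold of -100 (taken from the example above), and the default rating of 0 for unknown hosts are all assumptions, not part of the design.&lt;br /&gt;

```python
# Hypothetical sketch of a service-side Morality Rating (MR) check.
# The -100 threshold follows the example in the text; the record
# structure and the default MR of 0 are illustrative assumptions.

class JusticeWebService:
    def __init__(self, mr_threshold=-100):
        self.mr_threshold = mr_threshold
        self.ratings = {}          # host ID -> current MR

    def morality(self, host_id):
        # Hosts with no recorded offenses start at a neutral rating.
        return self.ratings.get(host_id, 0)

    def adjust(self, host_id, delta):
        """Applied by a Judge: negative delta after a conviction,
        positive delta for helpful behaviour within the network."""
        self.ratings[host_id] = self.morality(host_id) + delta

    def allow(self, host_id):
        """A connection is admitted only if the host's MR is above
        this service's threshold."""
        return self.morality(host_id) > self.mr_threshold
```

For example, a host convicted of offenses totalling -150 MR would be refused by a service with the default threshold, but later helpful behaviour could raise its rating back above -100 and restore access.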
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
In order to assign MR to offenders, an authority figure is needed to declare if a crime has been committed. In the Justice Web, this role is taken by the Judges, who may be one or more computers within the network. It is the Judges’ responsibility to create the rules of the network, gather the evidence when a claim is made, declare if a crime has been committed, and assign a new MR based on the ruling.&lt;br /&gt;
&lt;br /&gt;
How a Judge is picked isn’t set in stone, but in general it would be the node(s) in the network with the highest MR. Alternatively, the Judges could be picked through some democratic process.&lt;br /&gt;
&lt;br /&gt;
The judgments made are mostly automated, based on the rules of the network. However, it can be specified that certain crimes, such as a claim of a phishing scam being committed, be dealt with by a human.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The Justice Web is a virtual network, in that the nodes are not necessarily connected or even anywhere near each other. Because of this, it would be inconvenient and potentially harmful to have services look up a computer’s MR on every connection attempt. To prevent this, MR will be stored in a central location, but propagated throughout the network.&lt;br /&gt;
&lt;br /&gt;
This is done using a master-slave approach to database replication. The Judges of the network store the “Master List” and propagate the data to the “Slave Lists” stored by the services within the network. The records stored by a Slave List are determined by the thresholds the specific service has put in place. As mentioned in the Morality Rating subsection, a service can set thresholds to determine whether a computer should be allowed access. In that example, hosts at or below -100 MR would be blocked from the service. A service with only this threshold in place would only need to be aware of computers at or below -100 MR, and so would only store that data in its Slave List. Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
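A minimal sketch of this threshold-driven propagation, assuming the Master List is a simple mapping from host IDs to MR values (the function name and data layout are assumptions for illustration):&lt;br /&gt;

```python
# Hypothetical sketch of Master List -> Slave List propagation.
# A service only replicates the records it needs in order to
# enforce its own threshold, as described above.

def build_slave_list(master_list, threshold):
    """Return only the records a service with the given threshold
    must know about: hosts at or below the threshold are the ones
    it will refuse, so only those need to be replicated."""
    return {host: mr for host, mr in master_list.items() if mr <= threshold}

master = {"hostA": 10, "hostB": -100, "hostC": -250}
slave = build_slave_list(master, threshold=-100)
# slave holds only hostB and hostC; hostA need not be replicated
```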
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Judges define and use rules to determine whether a crime has been committed. A rule consists of three parts: the offense, the proof needed, and the severity of the punishment. The offense is a name assigned to the crime, which services can claim has been committed. The proof is the information the Judges require in order to make a conviction. The severity of the punishment is an integer value to subtract from the offender’s current MR.&lt;br /&gt;
&lt;br /&gt;
Each network deploying a Justice Web specifies its own set of rules. These rules are made available to the public so that services within the network are aware of the crimes they can report. This is akin to a human justice system, where everyone under a given legal system can see which actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the Criminal Code of Canada]).&lt;br /&gt;
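The three-part rule described above could be encoded directly. In this sketch, the field names, the example offenses, and the severity values (25 and 200) are all illustrative assumptions; a real deployment would define them in its published rule set:&lt;br /&gt;

```python
# Hypothetical encoding of a Justice Web rule: offense name,
# required proof, and punishment severity (the integer deducted
# from the offender's MR on conviction). All values are assumed.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    offense: str          # name services use when filing a claim
    proof_required: str   # evidence the Judges need to convict
    severity: int         # amount subtracted from the offender's MR

RULES = {
    "comment_spam": Rule("comment_spam",
                         "offending comment + commenter host ID", 25),
    "dos_participation": Rule("dos_participation",
                              "traffic logs for the attack window", 200),
}

def apply_conviction(mr, offense):
    """Judges deduct the rule's severity from the convicted host's MR."""
    return mr - RULES[offense].severity
```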
&lt;br /&gt;
===Evidence===&lt;br /&gt;
&lt;br /&gt;
Evidence is used by the Justice Web to determine if a crime has been committed. Evidence is stored in encrypted logs located on a service’s computer, and submitted to the judges when a claim is made.&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application-layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim, or any other system in the chain of custody, cannot tamper with evidence undetected. When evidence is received by the Judges, the logs are decrypted and reviewed.&lt;br /&gt;
The type of evidence required varies, and is defined by the Judges of a network. For a DDoS attack, the Justice Web would potentially be able to examine the evidence logs and determine, through the analysis of statistical evidence [17], which computers were actively involved in the attack and which connections were legitimate traffic.&lt;br /&gt;
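As a minimal sketch of tamper-evident logging, an HMAC can stand in for the digital signatures the text requires (key distribution, the log entry format, and the function names are assumptions for illustration; a real system would use asymmetric signatures so the Judges need not share a secret with each host):&lt;br /&gt;

```python
# Hypothetical tamper-evident evidence log entry. An HMAC over a
# canonical JSON serialization lets the Judges detect modification
# anywhere in the chain of custody. Key handling is assumed.

import hmac, hashlib, json

def sign_entry(entry, key):
    """Attach a MAC computed over a canonical form of the entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "mac": tag}

def verify_entry(signed, key):
    """Recompute the MAC and compare in constant time."""
    payload = json.dumps(signed["entry"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["mac"])
```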
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
Membership in a Justice Web would consist primarily of public-facing services seeking protection from attacks. However, because resources can be shared based on a node’s MR, there is also reason for computers to join the network simply to access those resources.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws, but these laws are not necessarily the same as those of other self-governing entities. In the real world, cross-jurisdiction legal systems do exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests in the council.&lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable implementation such as the Justice Web, where hosts opt in, is possible at a global scale. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of losing incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Morality Rating ===&lt;br /&gt;
&lt;br /&gt;
The global implementation still requires the existence of a morality rating, but in a global setting we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a &amp;quot;master list&amp;quot; or &amp;quot;slave list&amp;quot; of morality ratings. The obvious requirement for a built-in morality rating is that the host itself must not be able to arbitrarily modify the value. One possible mechanism is a [http://www.trustedcomputinggroup.org/developers/ Trusted Platform Module (TPM)], which allows encryption and decryption of data but does not allow extraction of the private encryption key. Storing the morality rating within hosts rather than on external lists alleviates the need for distributed storage and allows better scalability, but it also requires all hosts to be compliant with the mechanism. &lt;br /&gt;
&lt;br /&gt;
=== Connection management ===&lt;br /&gt;
&lt;br /&gt;
Due to the modified morality rating storage, there is no longer a need to look up the morality rating of a host on each incoming connection. We instead need a way to transmit the morality rating with each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean modifying the underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web. &lt;br /&gt;
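A sketch of this per-connection check, assuming a hypothetical message layout in which the sender's MR travels alongside the payload (in a real deployment the MR field would be written by trusted hardware such as the TPM above, not by the host's own software):&lt;br /&gt;

```python
# Hypothetical sketch of the global design's connection check: the
# sender's MR is embedded in the request, so the server decides
# without any external look-up. The message format is assumed.

def make_request(host_id, morality_rating, payload):
    # In practice the "mr" field would be filled in by trusted
    # hardware, so the host cannot forge a better rating.
    return {"host": host_id, "mr": morality_rating, "payload": payload}

def accept(request, threshold=-100):
    """Server-side filter: refuse requests whose embedded MR is at
    or below the threshold. This could equally run at a firewall."""
    return request["mr"] > threshold
```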
&lt;br /&gt;
=== Rules and Judges ===&lt;br /&gt;
&lt;br /&gt;
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research. &lt;br /&gt;
&lt;br /&gt;
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but it would require a full reboot of the Internet as well as new hardware, making it a realistically unlikely solution.&lt;br /&gt;
&lt;br /&gt;
==Use Cases==&lt;br /&gt;
&lt;br /&gt;
This section reviews three common attacks and describes how the computer-based justice system would deal with them.&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam===&lt;br /&gt;
The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evidence collected.&#039;&#039;&#039; The comment being reported as spam, as well as the website hosting the comment (forum, blog, etc.). The ID of the commenter is also collected, assuming we have a unique identifier for each commenting host. &lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&#039;&#039;&#039;Local implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Users report comment spam&lt;br /&gt;
*The morality of the offending host is adjusted if the evidence is found to incriminate the host. &lt;br /&gt;
*Based on the new morality rating, the offending host may not be allowed to post to the site depending on the restrictions of the hosting server&lt;br /&gt;
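The local steps above can be sketched end to end. Everything here is an illustrative assumption: the -25 spam penalty, the -100 posting threshold, and the trivial stand-in for a Judge's verdict:&lt;br /&gt;

```python
# Hypothetical end-to-end flow for local comment-spam handling:
# user report -> automated judgment -> MR adjustment -> posting
# check. Penalty and threshold values are assumptions.

SPAM_PENALTY = 25
POST_THRESHOLD = -100

ratings = {}  # host ID -> MR

def report_spam(host_id, evidence):
    """A user files a claim; the Judge convicts if the evidence
    incriminates the host (a trivial stand-in check here)."""
    if evidence.get("is_spam"):
        ratings[host_id] = ratings.get(host_id, 0) - SPAM_PENALTY

def may_post(host_id):
    """The hosting server refuses posts from hosts at or below
    its threshold."""
    return ratings.get(host_id, 0) > POST_THRESHOLD
```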
&lt;br /&gt;
&#039;&#039;&#039;Global implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Same method for reporting the comment spam, and for adjusting the morality rating as the local implementation above. &lt;br /&gt;
*If a host has a sufficiently low morality rating, the host site will disable the ability for the offending host to communicate with the site at all.&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service===&lt;br /&gt;
Denial of service is an attack in which a service that is normally available is accessed by a large number of hosts, or by a small number of hosts at high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evidence collected.&#039;&#039;&#039; ID of any host connecting to the victim server for the duration of the attack.&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&#039;&#039;&#039;Local implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*The morality rating of each connecting host is looked up to decide whether the request should be served. This may place even greater load on the host.&lt;br /&gt;
*Once it has been established that participants of the attack have an unacceptable morality rating, they are blocked from communication with the site.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Global implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Since morality rating is passed in with communication, requests could be filtered out (i.e. at a firewall level).&lt;br /&gt;
*Any incoming communication with a bad enough morality would simply be ignored.&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Similar to the comment spam attack, once a phishing site is reported and verified by the judges, the morality rating of the hosting node is lowered.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Applying justice to a distributed system requires an understanding of how society applies teleologic and retributive methods of punishment, as well as of the range of intent between purposely and negligently participating in an act. Discussions of punishment and intent brought up another social construct that exists in society: morality. Looking at a single computer, it is hard to say that the computer &amp;quot;intended&amp;quot; to do something, or that it would feel bad if we made it perform a series of repetitive operations as a form of punishment. Even though implementing emotions and a care for self-preservation in a computer is difficult, we can at least assign a morality value to each computer node, so that it may be judged by any individual that plans to communicate or interact with that node. By discussing specific cases in which a justice system would take part in a distributed system, we can conceptualize a basis upon which a future implementation of justice on computers might be possible. Given the advantages and disadvantages of implementing such a system at local and global scales, a more in-depth look is needed into the technical aspects, and into the assumptions that must be upheld by the other factors on the Internet (attribution, reputation, contracts), in order to fight injustice and to turn fear against those who prey on the fearful, as malicious users do to unprotected users. This is what the Justice Web is for.&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;br /&gt;
&lt;br /&gt;
[17] S. Yu, W. Zhou, R. Doss, &#039;&#039;Information theory based detection against network behavior mimicking DDoS attacks&#039;&#039;, IEEE, April 2008, [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1]. Last visited April 2011.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9456</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9456"/>
		<updated>2011-04-12T00:47:04Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Case 2: Denial of Service */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
[[https://docs.google.com/present/edit?id=0AQJ2IGOeo68XZGhuNnJ0YjRfM2doZDg3Ymc5&amp;amp;hl=en&amp;amp;authkey=CK7Mk4YO Presentation]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections. The first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second describes the components necessary to create a justice system for a distributed computing society and how these components would be used against three malicious acts: comment spam, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties to the basic institutions of society; second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment benefits the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to use.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to distinguish retributive punishment from retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation rests on the concept of deterrence: if you are convicted of a crime, someone will get to exact revenge, and thus you will pay a price. Retribution requires the criminal to pay a price for the crime committed, so that he internalizes how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment benefits a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic-retributive punishment. This new punishment would match the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can impose punishments on other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exception is any member of society who is without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system the sovereign may also reward individuals; the balance of punishment and reward are thus the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule places one overall leader of justice above the system, who determines what is right and wrong in order to best serve the system&#039;s needs. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of their crime. The fine imposed upon the criminal may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused.[1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals to decide when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive; criminals may commonly be required to pay a fine as well as serve a prison sentence. However, the methods serve different purposes. All three punishment types deter future criminals, but each has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison reduces personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is framed as good vs. bad: good is associated with things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is framed as good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers terms like worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, conducting DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. The highest level is committing a crime on purpose, and the lowest is taking part in a crime negligently. An example of such a distinction is deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to avoid the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crimes involving computers and the Internet. One early legislative response, created in 1984 by the United States Congress, is the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many amendments because of ambiguity in how different crimes were categorized based on the mens rea. Whether an act was done “knowingly” or “intentionally” changes the degree of punishment, and the distinction between accessing a system and damaging a system also had to be made more specific over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of then-current security measures on computer networks by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes the commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true, except in cases where an error has occurred in the system; for example, a bit has gone missing and sensitive information has been sent to an incorrect address. If this error created large losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are built, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to prevent a computer from performing malicious actions. Following the footsteps of general deterrence theory, we would try to instill some fear of the consequences and shame that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings; to a computer, any kind of work, from word processing to a denial of service attack, is equal to whatever else it might prefer to do. Deterring potential criminals only works if they fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was performing those functions or standing by idle. However, the human who forces their computer to perform such malicious actions may be deterred by the consequences that might follow from the law, or by the possible performance drop on their own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on the network by the same human user at another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would ideally one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating would allow nodes to decide whether to communicate with other nodes based on how low or high that rating is. Lowering a node&#039;s morality for malicious actions, and raising it when the node is helpful to the system, would allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes the above discussion involving justice, and applies it to the management of a computer network. The implementation is designed to be incrementally-deployable, so that it would be realistic for a network to use the proposed system. The implementation is entitled the “Justice Web”. &lt;br /&gt;
&lt;br /&gt;
The purpose of the Justice Web is to protect public-facing services from attacks coming from outside the network. This is accomplished by keeping a record of the criminal acts committed by connecting computers, and allowing the services access to these records. Criminal acts in this case are actions by a connection that are considered harmful to the network. The record kept by the network is a “Morality Rating”, an integer meant to reflect the severity of the crimes committed.&lt;br /&gt;
&lt;br /&gt;
===Assumptions===&lt;br /&gt;
&lt;br /&gt;
Certain assumptions must be made regarding the other class projects in order for this implementation to be deployable. Most importantly, it is assumed that there is some way in which the network can uniquely identify a computer that connects to the network. This allows the Justice Web to keep a criminal log of clients, and recognize if an offender is attempting to connect.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
&lt;br /&gt;
Morality Rating (MR) is an integer assigned to computers that have connected to a service within the Justice Web. The purpose of the MR is to keep track of a computer’s past offenses, and allow services to restrict access using thresholds. For instance, a service within the Justice Web could restrict access to those above -100 MR.&lt;br /&gt;
&lt;br /&gt;
While the primary purpose of the Justice Web is to protect against attackers from outside the network, every node in the Justice Web is assigned an MR, which increases and decreases based on their actions within the network. Ideally, those with higher MR are allowed access to more shared resources, though this would be implementation specific.&lt;br /&gt;
&lt;br /&gt;
The MR assigned to a computer is local to the Justice Web that assigned the rating. For example, if two separate networks deploy a Justice Web, the ratings they assign do not affect the other network’s ratings.&lt;br /&gt;
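As an illustration, the threshold check a service might perform could be sketched as follows. This is only a sketch: the names (allow_connection, local_ratings, MR_THRESHOLD) and the choice to treat unknown hosts as neutral (MR 0) are our own assumptions, not part of any specified Justice Web API.&lt;br /&gt;

```python
# Hypothetical sketch of a service-side Morality Rating (MR) check.
# All names and the default rating for unknown hosts are illustrative.

MR_THRESHOLD = -100  # connections at or below this rating are refused

# Fragment of a locally stored rating list: host ID -> last known MR
local_ratings = {"host-a": 5, "host-b": -150}

def allow_connection(host_id: str) -> bool:
    """Allow a connection unless the host's MR is at or below the threshold.
    Hosts with no record are treated as neutral (MR 0)."""
    return local_ratings.get(host_id, 0) > MR_THRESHOLD

assert allow_connection("host-a")          # rating 5: allowed
assert not allow_connection("host-b")      # rating -150: refused
assert allow_connection("unknown-host")    # no record: treated as MR 0
```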
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
In order to assign MR to offenders, an authority figure is needed to declare if a crime has been committed. In the Justice Web, this role is taken by the Judges, who may be one or more computers within the network. It is the Judges’ responsibility to create the rules of the network, gather the evidence when a claim is made, declare if a crime has been committed, and assign a new MR based on the ruling.&lt;br /&gt;
&lt;br /&gt;
How a Judge is picked isn’t set in stone, but in general it would be the node(s) in the network with the highest MR. Alternatively, the Judges could be picked through some democratic process.&lt;br /&gt;
&lt;br /&gt;
The judgments made are mostly automated, based on the rules of the network. However, it can be specified that certain crimes, such as a claim of a phishing scam being committed, be dealt with by a human.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The Justice Web is a virtual network, in that the nodes are not necessarily connected or even anywhere near each other. Because of this, it would be inconvenient and potentially harmful to have services look up a computer’s MR on every connection attempt. To prevent this, MR will be stored in a central location, but propagated throughout the network.&lt;br /&gt;
&lt;br /&gt;
This is done using a master-slave approach to database replication. The Judges of the network store the “Master List” and propagate the data to the “Slave Lists” stored by the services within the network. The records stored in a Slave List are decided by the thresholds that the specific service has put in place. As mentioned in the Morality Rating subsection, a service can set thresholds to determine if a computer should be allowed access. In that example, an MR of -100 would be blocked from the service. If a service had only this threshold in place, it would only need to be aware of computers with an MR of -100 or lower, and so would only store that data in its Slave List. Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
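The threshold-filtered replication described above could be sketched as below. The function name and data layout are our own invention; a real deployment would replicate over the network rather than in memory.&lt;br /&gt;

```python
# Hypothetical sketch of threshold-filtered Master List replication:
# the Judges' Master List pushes to a service only the records that
# service can act on, given its blocking threshold. Names are illustrative.

def build_slave_list(master_list: dict, threshold: int) -> dict:
    """Return only the records at or below the service's blocking threshold,
    since those are the only hosts the service will refuse."""
    return {host: mr for host, mr in master_list.items() if mr <= threshold}

master = {"host-a": 10, "host-b": -100, "host-c": -250}
slave = build_slave_list(master, threshold=-100)
# slave now holds only host-b and host-c; host-a need not be replicated
```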
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Judges define and use rules to determine whether a crime has been committed. A rule consists of three parts: The offense, the proof needed, and the severity of the punishment. The offense is a name assigned to the crime, which services can claim has been committed. The proof is the required information for the judges to be able to make a conviction. The severity of the punishment is an integer value to negate from the offender’s current MR.&lt;br /&gt;
&lt;br /&gt;
Each network deploying a Justice Web specifies its own set of rules. These rules are made available to the public so that services within the network are aware of the crimes they can report. This is akin to a human justice system, where everyone under a legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the Criminal Code of Canada]).&lt;br /&gt;
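The three-part rule structure described above could be encoded as in the following sketch. The Rule type, its field names, and the example rule are hypothetical; the Justice Web does not prescribe a concrete representation.&lt;br /&gt;

```python
from dataclasses import dataclass

# Hypothetical encoding of a Justice Web rule: an offense name, the proof
# required for conviction, and an integer severity deducted from the
# offender's MR. Purely illustrative.

@dataclass(frozen=True)
class Rule:
    offense: str          # name services use when filing a claim
    proof_required: str   # evidence the Judges need to convict
    severity: int         # amount deducted from the offender's MR

def apply_conviction(current_mr: int, rule: Rule) -> int:
    """Return the offender's new MR after a conviction under this rule."""
    return current_mr - rule.severity

spam_rule = Rule("comment-spam", "signed server log of the posted comment", 25)
assert apply_conviction(0, spam_rule) == -25
```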
&lt;br /&gt;
===Evidence===&lt;br /&gt;
&lt;br /&gt;
Evidence is used by the Justice Web to determine if a crime has been committed. Evidence is stored in encrypted logs located on a service’s computer, and submitted to the judges when a claim is made.&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim or any other system in the chain of custody does not tamper with evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
The type of evidence required varies and is defined by the Judges of a network. For a DDoS attack, the Justice Web would potentially be able to examine the evidence logs and determine, through analysis of statistical evidence, which computers were actively involved in the attack and which traffic was legitimate [17].&lt;br /&gt;
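One way tamper-evident evidence logs could work is sketched below using an HMAC over each entry, so that Judges can detect modification anywhere in the chain of custody. This is an assumption-laden illustration: the key handling (a key shared with the Judges, provisioned out of band) and the entry format are invented here, not specified by the Justice Web.&lt;br /&gt;

```python
import hmac, hashlib, json

# Hypothetical sketch of tamper-evident evidence logging. The shared key
# and record format are illustrative assumptions only.

SECRET_KEY = b"shared-with-judges"  # assumption: provisioned out of band

def sign_entry(entry: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a log entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "hmac": tag}

def verify_entry(signed: dict) -> bool:
    """Recompute the tag; any modification of the entry invalidates it."""
    payload = json.dumps(signed["entry"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["hmac"])

record = sign_entry({"src": "host-b", "action": "comment-post"})
assert verify_entry(record)
record["entry"]["src"] = "host-a"  # tampering in the chain of custody
assert not verify_entry(record)    # is detected on review by the Judges
```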
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
Membership in a Justice Web would primarily consist of public-facing services seeking protection from attacks. However, because resources can be shared based on a node’s MR, there is also reason for computers to join the network simply for access to these resources.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily the same as those of other self-governing entities. In the real world, cross-jurisdiction legal systems do exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally in these councils, each participating member country appoints one or more people to represent the country&#039;s interests.  &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable, opt-in implementation such as the Justice Web is possible at a global scale. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of losing incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Morality Rating ===&lt;br /&gt;
&lt;br /&gt;
The global implementation still requires the existence of a morality rating, but in a global setting we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a &amp;quot;master list&amp;quot; or &amp;quot;slave list&amp;quot; of morality ratings. The obvious requirement for a built-in morality rating is that the host itself should not be able to arbitrarily modify the value. One possible mechanism is a Trusted Platform Module ([http://www.trustedcomputinggroup.org/developers/ TPM]), which allows encryption and decryption of data but does not allow the extraction of the private encryption key. Indeed, storing the morality rating within hosts rather than on external lists alleviates the need for distributed storage and allows better scalability, but it also requires all hosts to be compliant with the mechanism. &lt;br /&gt;
&lt;br /&gt;
=== Connection management ===&lt;br /&gt;
&lt;br /&gt;
Due to the modified morality rating storage, there is no longer a need to look up the morality rating of a host on incoming connections. We therefore need a way to transmit the morality rating on each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean modifying underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web. &lt;br /&gt;
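A minimal sketch of carrying the morality rating as a protocol header field follows. The wire format (a signed 16-bit field in network byte order) is entirely our own assumption; no such field exists in real protocols, and a deployment would need the TPM-backed mechanism above to keep the value honest.&lt;br /&gt;

```python
import struct

# Hypothetical sketch: the morality rating travels in a fixed-size
# header field, letting the receiver filter connections before any
# application-layer processing. The format here is invented.

HEADER_FMT = "!h"  # network byte order, signed 16-bit morality rating

def make_header(mr: int) -> bytes:
    """Pack the sender's morality rating into the header field."""
    return struct.pack(HEADER_FMT, mr)

def should_accept(packet: bytes, threshold: int = -100) -> bool:
    """Receiver-side check: read the rating and compare to a local threshold."""
    (mr,) = struct.unpack_from(HEADER_FMT, packet)
    return mr > threshold

assert should_accept(make_header(5))        # well-behaved host: accepted
assert not should_accept(make_header(-150)) # bad rating: dropped at the edge
```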
&lt;br /&gt;
=== Rules and Judges ===&lt;br /&gt;
&lt;br /&gt;
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research. &lt;br /&gt;
&lt;br /&gt;
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but it would require a full reboot of the Internet as well as new hardware, making it a realistically unlikely solution.&lt;br /&gt;
&lt;br /&gt;
==Use Cases==&lt;br /&gt;
&lt;br /&gt;
This section reviews three common attacks and describes how the computer-based justice system would deal with them. &lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam===&lt;br /&gt;
The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evidence collected.&#039;&#039;&#039; The comment being reported as spam, as well as the website hosting the comment (forum, blog, etc.). The ID of the commenter is also collected, assuming we have a unique identifier for each commenting host. &lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&#039;&#039;&#039;Local implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Users report comment spam&lt;br /&gt;
*The morality of the offending host is adjusted if the evidence is found to incriminate the host. &lt;br /&gt;
*Based on the new morality rating, the offending host may not be allowed to post to the site depending on the restrictions of the hosting server&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Global implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Same method for reporting the comment spam, and for adjusting the morality rating as the local implementation above. &lt;br /&gt;
*If a host has a sufficiently low morality rating, the host site will disable the ability for the offending host to communicate with the site at all.&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service===&lt;br /&gt;
Denial of service is the act in which a service that is normally available is accessed by a large number of hosts, or a small number of hosts with high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&#039;&#039;&#039;Local implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*The morality of each requesting host is looked up to see if the request should be served. This may place even greater load on the host under attack.&lt;br /&gt;
*Once it has been established that participants of the attack have an unacceptable morality rating, they are blocked from communication with the site.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Global implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Since morality rating is passed in with communication, requests could be filtered out (i.e. at a firewall level).&lt;br /&gt;
*Any incoming communication with a bad enough morality would simply be ignored.&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Similar to the comment spam attack, once a phishing site is reported and verified by the judges, the morality rating of the hosting node is lowered.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Applying justice to a distributed system requires an understanding of how society applies teleologic and retributive methods of punishment, as well as of the range between purposely and negligently participating in an act. Discussions of punishment and intent brought up another social construct that exists in society: morality. When looking at a single computer, it is hard to consider that the computer &amp;quot;intended&amp;quot; to do something, or that it would feel bad if we made it perform a bunch of repetitive operations as a form of punishment. Even though implementing emotions and a care for self-preservation in a computer is difficult, we can at least apply a morality value to each computer node, so that it may be judged by any individual that plans on communicating or interacting with that node. By discussing specific cases in which a justice system would take part in a distributed system, we can conceptualize a basis upon which a future implementation of justice on computers might be possible. Given the advantages and disadvantages of implementing such a system at a local and global scale, it is evident that a more in-depth look is required into the technical aspects, and that the assumptions supported by the other factors on the Internet (attribution, reputation, contracts) must be upheld, in order to fight injustice and to turn fear against those who prey on the fearful, as malicious users do to unprotected users. This is what the Justice Web is for.&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;br /&gt;
&lt;br /&gt;
[17] S. Yu, W. Zhou, R. Doss, &#039;&#039;Information theory based detection against network behavior mimicking DDoS attacks&#039;&#039;, IEEE, April 2008, [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1]. Last visited April 2011.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9455</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9455"/>
		<updated>2011-04-12T00:45:22Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Case 1: Comment Spam */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
[[https://docs.google.com/present/edit?id=0AQJ2IGOeo68XZGhuNnJ0YjRfM2doZDg3Ymc5&amp;amp;hl=en&amp;amp;authkey=CK7Mk4YO Presentation]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against three malicious acts: comment spam, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment with retaliation. Although both incorporate the concept of punishment as a just, and necessary, act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence; if you are convicted of a crime then someone will get to exact revenge and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effects of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime but still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
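As a rough sketch of this teleologic-retributive principle, a network could map offence severity to the least disruptive punishment that still fits the crime. The severity scale, punishment names, and thresholds below are invented for illustration:&lt;br /&gt;

```python
# Hypothetical sketch: map an offence severity (0-100) to the least
# disruptive punishment that still matches the crime, in the spirit of
# the teleologic-retributive view. Scale and thresholds are invented.

def choose_punishment(severity):
    if severity >= 75:
        return "remove_from_network"   # last resort: full expulsion
    elif severity >= 50:
        return "block_service"         # block only the offending traffic type
    elif severity >= 25:
        return "throttle_bandwidth"    # slow the node down, keep it reachable
    else:
        return "warn"                  # record the offence, no restriction
```

A node caught spamming would thus keep routing traffic for others while only its offending communication is blocked.&lt;br /&gt;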
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal, computers. Here we will briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. In this work, Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; punishment and reward are thus the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, under sovereign rule one overall leader of justice determines what is right and wrong in order to best serve the needs of the system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive; criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence. However, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad; for example, good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil; for example, good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say, DoS or spam attacks) then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not) then you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, being part of a crime negligently. An example of such a distinction would be deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act are punished with the same severity, to prevent the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crimes involving computers and the Internet. One preventative measure was created in 1984 by the United States Congress: the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of the mens rea and the MPC. Since it was first implemented, many changes have had to be made to it because of unspecific instances of how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of then-current security measures on computer networks by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill the worm, but it was too late, and many computers across the Internet had been affected. The government had to try to prove that it was his intent to access unauthorized computers, which it did, and it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that the computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created heavy losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from continually creating new programs until one is deemed suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from doing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences or shame in whoever causes malicious actions. This approach to justice is difficult because computers do not have feelings; doing any kind of work, from word processing to a denial of service attack, is all the same to a computer. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that they would procure from a malicious act. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. However, the human who forces a computer to take such malicious actions may be deterred from doing so because of the consequences that might follow from the law, or the possible performance drop on his own computer. The penalties currently in place only affect a human; whether it is a jail sentence or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, then nothing would prevent further malicious actions from occurring on the network by the same human user using another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. A morality system in which every node carries a personal morality rating would allow nodes to decide whether to communicate with other nodes based on how low or high the rating is. Lowering the rating for malicious actions and raising it for behaviour helpful to the system would allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and also to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented on computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes the above discussion involving justice, and applies it to the management of a computer network. The implementation is designed to be incrementally-deployable, so that it would be realistic for a network to use the proposed system. The implementation is entitled the “Justice Web”. &lt;br /&gt;
&lt;br /&gt;
The purpose of the Justice Web is to protect public-facing services from attacks coming from outside the network. This is accomplished by keeping a record of the criminal acts committed over connections, and allowing the services access to these records. Criminal acts in this case are actions by a connecting computer that are considered harmful to the network. The record kept by the network is a “Morality Rating”, an integer meant to reflect the severity of the crimes committed.&lt;br /&gt;
&lt;br /&gt;
===Assumptions===&lt;br /&gt;
&lt;br /&gt;
Certain assumptions must be made regarding the other class projects in order for this implementation to be deployable. Most importantly, it is assumed that there is some way in which the network can uniquely identify a computer that connects to the network. This allows the Justice Web to keep a criminal log of clients, and recognize if an offender is attempting to connect.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
&lt;br /&gt;
Morality Rating (MR) is an integer assigned to computers that have connected to a service within the Justice Web. The purpose of the MR is to keep track of a computer’s past offenses, and allow services to restrict access using thresholds. For instance, a service within the Justice Web could restrict access to those above -100 MR.&lt;br /&gt;
&lt;br /&gt;
While the primary purpose of the Justice Web is to protect against attackers from outside the network, every node in the Justice Web is assigned an MR, which increases and decreases based on its actions within the network. Ideally, those with a higher MR are allowed access to more shared resources, though this would be implementation specific.&lt;br /&gt;
&lt;br /&gt;
The MR assigned to a computer is local to the Justice Web that assigned the rating. For example, if two separate networks deploy a Justice Web, the ratings they assign do not affect the other network’s ratings.&lt;br /&gt;
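As a minimal sketch, the MR bookkeeping described above could look like the following; the class layout and the neutral starting rating of 0 are assumptions, while the -100 threshold follows the example in the text:&lt;br /&gt;

```python
# Sketch of a per-Justice-Web Morality Rating (MR) store. The layout
# is an assumption; the -100 threshold follows the example in the text.

class JusticeWeb:
    def __init__(self):
        self.ratings = {}  # node id mapped to MR, local to this Justice Web

    def mr(self, node_id):
        # Unknown nodes start at a neutral rating of 0.
        return self.ratings.get(node_id, 0)

    def adjust(self, node_id, delta):
        # Raise MR for helpful behaviour, lower it for offences.
        self.ratings[node_id] = self.mr(node_id) + delta

    def allows(self, node_id, threshold=-100):
        # A service admits a node only if its MR is above the threshold.
        return self.mr(node_id) > threshold
```

A service would consult allows() on each connection attempt before serving a node; each service may pass its own threshold.&lt;br /&gt;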
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
In order to assign MR to offenders, an authority figure is needed to declare if a crime has been committed. In the Justice Web, this role is taken by the Judges, who may be one or more computers within the network. It is the Judges’ responsibility to create the rules of the network, gather the evidence when a claim is made, declare if a crime has been committed, and assign a new MR based on the ruling.&lt;br /&gt;
&lt;br /&gt;
How a Judge is picked isn’t set in stone, but in general it would be the node(s) in the network with the highest MR. Alternatively, the Judges could be picked through some democratic process.&lt;br /&gt;
&lt;br /&gt;
The judgments made are mostly automated, based on the rules of the network. However, it can be specified that certain crimes, such as a claim of a phishing scam being committed, be dealt with by a human.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The Justice Web is a virtual network, in that the nodes are not necessarily connected or even anywhere near each other. Because of this, it would be inconvenient and potentially harmful to have services look up a computer’s MR on every connection attempt. To prevent this, MR will be stored in a central location, but propagated throughout the network.&lt;br /&gt;
&lt;br /&gt;
This is done using a master-slave approach to database replication. The Judges of the network store the “Master List”, and propagate the data to the “Slave Lists” stored by the services within the network. The records stored in a Slave List are decided by the thresholds that the specific service has put in place. As mentioned in the Morality Rating subsection, a service can set thresholds to determine if a computer should be allowed access. In the example, an MR of -100 would be blocked from the service. If a service had only this threshold in place, it would only need to be aware of computers with -100 MR, and so would only store that data in its Slave List. Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
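The replication scheme above can be sketched as follows; the Judges hold the full list and push to each service only the records at or below that service&#039;s blocking threshold. All names are illustrative:&lt;br /&gt;

```python
# Sketch of Master List / Slave List replication: the Judges hold the
# authoritative ratings and each service replicates only the nodes it
# would block. Class and method names are assumptions.

class MasterList:
    def __init__(self):
        self.records = {}   # node id mapped to MR, the authoritative copy
        self.slaves = []    # (threshold, slave dict) pairs, one per service

    def register_slave(self, threshold):
        slave = {}
        self.slaves.append((threshold, slave))
        return slave

    def set_mr(self, node_id, mr):
        self.records[node_id] = mr
        for threshold, slave in self.slaves:
            if mr > threshold:
                # Above the service's threshold: not blocked, so the
                # slave need not store this node at all.
                slave.pop(node_id, None)
            else:
                slave[node_id] = mr
```

Each service registers one slave with its own threshold, so services never have to query the Master List on a connection attempt.&lt;br /&gt;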
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Judges define and use rules to determine whether a crime has been committed. A rule consists of three parts: the offense, the proof needed, and the severity of the punishment. The offense is a name assigned to the crime, which services can claim has been committed. The proof is the information the judges require to make a conviction. The severity of the punishment is an integer value subtracted from the offender’s current MR.&lt;br /&gt;
&lt;br /&gt;
Each network deploying a Justice Web specifies their own set of rules. These rules are made available to the public so that services within the network are aware of the crimes they can report. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
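The three-part rule described above maps naturally onto a small record type. The field names and the two example rules are assumptions for illustration:&lt;br /&gt;

```python
# A rule has three parts, as described in the text: the offense name,
# the proof the Judges require, and the punishment severity. Field
# names and the example rule set are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    offense: str          # name that services can claim was committed
    proof_required: str   # evidence the Judges need for a conviction
    severity: int         # amount subtracted from the offender's MR

# A network publishes its rule set so services know what they may report.
RULES = {
    "comment_spam": Rule("comment_spam", "signed application logs", 25),
    "ddos": Rule("ddos", "signed packet captures", 75),
}
```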
&lt;br /&gt;
===Evidence===&lt;br /&gt;
&lt;br /&gt;
Evidence is used by the Justice Web to determine if a crime has been committed. Evidence is stored in encrypted logs located on a service’s computer, and submitted to the judges when a claim is made.&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application-layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that neither the computer making the claim nor any other system in the chain of custody can tamper with the evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
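&lt;br /&gt;
One possible sketch of such tamper-evident logging (the article does not fix a scheme; the shared-key HMAC approach and the key itself are assumptions made here for illustration):&lt;br /&gt;

```python
# Sketch: each host authenticates its evidence log entries with an HMAC,
# assuming it shares a signing key with the judges. Any modification of an
# entry in the chain of custody then fails verification.
import hashlib
import hmac

KEY = b"placeholder-shared-key"  # hypothetical key shared with the judges

def sign_entry(entry):
    """Return an authentication tag for one evidence log entry (bytes)."""
    return hmac.new(KEY, entry, hashlib.sha256).hexdigest()

def verify_entry(entry, tag):
    """Check that an entry has not been tampered with since it was signed."""
    return hmac.compare_digest(sign_entry(entry), tag)

entry = b"2011-04-12T00:50 GET /login from 10.0.0.1"
tag = sign_entry(entry)
```
&lt;br /&gt;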
The type of evidence required varies, and is defined by the Judges of a network. For a DDoS attack, the Justice Web would potentially be able to examine the evidence logs and determine, through the analysis of statistical evidence [17], which computers were actively involved in the attack and which traffic was legitimate.&lt;br /&gt;
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
Membership in a Justice Web would consist primarily of public-facing services seeking protection from attacks. However, because resources can be shared based on a node’s MR, there is also reason for computers to join the network simply to gain access to these resources.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily the same across different self-governing entities. In the real world, cross-jurisdiction legal systems do exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations in which countries participate in so-called &amp;quot;global councils&amp;quot;. Generally, in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests in the council.&lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable implementation such as the Justice Web, where hosts opt in, is possible at the global scale. This section briefly discusses a different approach to the Justice Web that attempts to deal with some of the restrictions mentioned above, at the expense of losing incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Morality Rating ===&lt;br /&gt;
&lt;br /&gt;
The global implementation still requires the existence of a morality rating, but in a global setting, we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a &amp;quot;master list&amp;quot; or &amp;quot;slave list&amp;quot; of morality ratings. The obvious requirement for a built-in morality rating is that the host itself should not be able to arbitrarily modify the value. One possible mechanism is a [http://www.trustedcomputinggroup.org/developers/ Trusted Platform Module (TPM)], which allows encryption and decryption of data but does not allow the extraction of the private encryption key. Storing the morality rating within hosts rather than in external lists alleviates the need for distributed storage and allows better scalability, but also requires all hosts to be compliant with the mechanism. &lt;br /&gt;
&lt;br /&gt;
=== Connection management ===&lt;br /&gt;
&lt;br /&gt;
Because morality ratings are now stored on the hosts themselves, there is no longer a need to look up the morality rating of a host upon incoming connections. We instead need a way to transmit the morality rating on each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean modifying underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web. &lt;br /&gt;
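&lt;br /&gt;
With the rating carried in the connection itself, the decision on the receiving side reduces to a single local comparison, sketched below (the threshold value and function names are illustrative):&lt;br /&gt;

```python
# Sketch: in the global scheme the client's MR arrives inside the
# connection itself, so the server decides without any external lookup.
from operator import ge  # ge(a, b) tests whether a is at or above b

SERVER_THRESHOLD = -50  # hypothetical minimum MR this server accepts

def accept_connection(client_mr):
    """Allow the connection only if the transmitted MR meets the threshold."""
    return ge(client_mr, SERVER_THRESHOLD)
```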
&lt;br /&gt;
=== Rules and Judges ===&lt;br /&gt;
&lt;br /&gt;
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research. &lt;br /&gt;
&lt;br /&gt;
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but would require a full reboot of the Internet as well as new hardware, making it an unlikely solution in practice.&lt;br /&gt;
&lt;br /&gt;
==Use Cases==&lt;br /&gt;
&lt;br /&gt;
This section reviews three common attacks and describes how the computer-based justice system would deal with them. &lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam===&lt;br /&gt;
The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Evidence collected.&#039;&#039;&#039; The comment being reported as spam, as well as the website hosting the comment (forum, blog, etc.). The ID of the commenter is also collected, assuming we have a unique identifier for each commenting host. &lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&#039;&#039;&#039;Local implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*Users report comment spam&lt;br /&gt;
*The morality rating of the offending host is adjusted if the evidence is found to incriminate the host. &lt;br /&gt;
*Based on the new morality rating, the offending host may be barred from posting to the site, depending on the restrictions of the hosting server.&lt;br /&gt;
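&lt;br /&gt;
The three steps above can be sketched as follows (the judges’ evidence check is stubbed out as a callback, since the article leaves its details to the rules of each network):&lt;br /&gt;

```python
# Sketch of the local comment-spam flow: a report is filed, judges check
# the evidence, and the offender's MR is lowered on conviction. The
# incriminates() callback stands in for the judges' evidence review.
def handle_spam_report(ratings, offender, evidence, incriminates, severity=10):
    """Lower the offender's MR if the evidence incriminates the host."""
    if incriminates(evidence):
        ratings[offender] = ratings.get(offender, 0) - severity
    return ratings

ratings = handle_spam_report({}, "10.0.0.9", "spam comment", lambda e: True)
```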
&lt;br /&gt;
&#039;&#039;&#039;Global implementation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
*The comment spam is reported, and the morality rating adjusted, using the same methods as in the local implementation above. &lt;br /&gt;
*If a host has a sufficiently low morality rating, the site will block the offending host from communicating with it at all.&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service===&lt;br /&gt;
Denial of service is an act in which a service that is normally available is accessed by a large number of hosts, or by a small number of hosts at high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&lt;br /&gt;
In the Justice Web, the server would have to look up the morality rating of each host participating in the attack, which may further increase the load on the server. The global implementation would see the morality rating of each incoming connection and could filter (e.g., at the firewall level) hosts with a sufficiently low rating. While DoS is generally regarded as a very difficult problem to solve, the morality ratings of participating hosts would be lowered, possibly limiting their ability to perform an attack in the future. &lt;br /&gt;
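&lt;br /&gt;
The firewall-level filtering available to the global implementation might look like the following sketch (the packet representation and cutoff value are illustrative assumptions):&lt;br /&gt;

```python
# Sketch: incoming packets carry the sender's MR, so a firewall can drop
# low-rated senders before they reach the service under attack.
from operator import lt  # lt(a, b) tests whether a is strictly below b

CUTOFF = -50  # hypothetical MR below which traffic is dropped

def filter_traffic(packets):
    """packets: list of (source, mr) pairs; keep only acceptable senders."""
    return [(src, mr) for src, mr in packets if not lt(mr, CUTOFF)]
```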
&lt;br /&gt;
===Case 3: Phishing===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves an activity that is perfectly legal in itself: hosting a website. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Similar to the comment spam attack, once a phishing site is reported and verified by the judges, the morality rating of the hosting node is lowered.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
Applying justice to a distributed system requires an understanding of how society administers teleologic or retributive methods of punishment, as well as of the range of culpability between purposely and negligently participating in a deviant act. Discussions of punishment and intent brought up another social construct that exists in society: morality. When looking at a single computer, it is hard to say that the computer &amp;quot;intended&amp;quot; to do something, or that it would feel remorse if we made it perform repetitive operations as a form of punishment. Even though implementing emotions and a care for self-preservation in a computer is difficult, we can at least apply a morality value to each computer node, so that it may be judged by any individual that plans on communicating or interacting with that node. By discussing specific cases in which a justice system would take part in a distributed system, we can conceptualize a basis upon which a future implementation of justice on computers might be possible. Given the advantages and disadvantages of implementing such a system at a local and a global scale, it is evident that a more in-depth look is required into certain technical aspects, and that the assumptions supported by the other factors on the Internet (attribution, reputation, contracts) must be upheld, in order to seek the means to fight injustice and to turn fear against those who prey on the fearful, as malicious users do to unprotected users. This is what the Justice Web is for.&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;br /&gt;
&lt;br /&gt;
[17] S. Yu, W. Zhou, R. Doss, &#039;&#039;Information theory based detection against network behavior mimicking DDoS attacks&#039;&#039;, IEEE, April 2008, [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1]. Last visited April 2011.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9103</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9103"/>
		<updated>2011-04-05T16:09:52Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
[[https://docs.google.com/present/edit?id=0AQJ2IGOeo68XZGhuNnJ0YjRfM2doZDg3Ymc5&amp;amp;hl=en&amp;amp;authkey=CK7Mk4YO Presentation]]&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against three malicious acts: comment spam, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment. [3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment. [2]&lt;br /&gt;
&lt;br /&gt;
It is also important to discuss retributive punishment in comparison to retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus to internalize how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effects of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed within the system such that some computers can hand out punishments upon other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system into a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work, Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges who may carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are any members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be required to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split based on good vs. bad; for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil; for example, good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers, so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels rank from the highest, acting upon a crime on purpose, to the lowest, being part of a crime negligently. An example of such a distinction would be determining whether a car hitting someone had been done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to avoid the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime on the Internet and in computers. An example of a preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created in 1984 by the United States Congress. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because of unspecific instances of how different crimes were categorized based on the mens rea: the difference between “knowingly” and “intentionally” doing an act changed the degree of punishment, and accessing a system versus damaging a system had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacy of the security measures on the computer networks of the day by releasing a worm onto the Internet. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that he intended to access unauthorized computers, which he did, and it also tried to prove that he intended to damage the machines, but at that time damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes the commands input by its user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error occurs in the system: say a bit goes missing and sensitive information is sent to an incorrect address. If this error caused losses to some entity, would the user be blamed, or would the computer be blamed for negligently sending the data to the wrong address? The situation is similar to how humans are charged for killing someone: the difference between murder and manslaughter is intent.[12] With the way computers are currently structured, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if a computer ran a genetic programming system to create a program it deemed good, and then intentionally used it, that intent to use would differ from it merely continuing to create new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer could be decided, the next thing to consider is how to prevent a computer from performing malicious actions. Following general deterrence theory, we would try to instill some fear of consequences, or shame, in causing malicious actions. The problem with this approach is that computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable to the machine. Deterrence only works on potential criminals who fear the consequences and cannot accept the ratio of penalty to profit that a malicious act would bring. If a computer's punishment were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing by idle. The human who forces a computer to take malicious actions, however, may be deterred by the legal consequences or by the performance drop on their own machine. The penalties currently in place only affect a human: whether the sentence is jail time or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer alone were punished for malicious actions, nothing would prevent the human user from continuing those actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; instead, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node a personal morality rating allows nodes to decide whom to communicate with based on how low or high the rating is. Lowering the rating for malicious actions and raising it for helpful ones lets computers &amp;quot;care&amp;quot; about who they are communicating with, and &amp;quot;feel shame&amp;quot; when their rating is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated feeling of care and shame is what might allow a justice system to be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still worthwhile, however, to describe the features a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes the above discussion involving justice, and applies it to the management of a computer network. The implementation is designed to be incrementally-deployable, so that it would be realistic for a network to use the proposed system. The implementation is entitled the “Justice Web”. &lt;br /&gt;
&lt;br /&gt;
The purpose of the Justice Web is to protect public-facing services from attacks coming from outside the network. This is accomplished by keeping a record of the criminal acts committed over connections, and giving services access to these records. Criminal acts in this case are actions by a connecting host that are considered harmful to the network. The record kept by the network is a “Morality Rating”, an integer meant to reflect the severity of the crimes committed.&lt;br /&gt;
&lt;br /&gt;
===Assumptions===&lt;br /&gt;
&lt;br /&gt;
Certain assumptions must be made regarding the other class projects in order for this implementation to be deployable. Most importantly, it is assumed that there is some way in which the network can uniquely identify a computer that connects to the network. This allows the Justice Web to keep a criminal log of clients, and recognize if an offender is attempting to connect.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
&lt;br /&gt;
Morality Rating (MR) is an integer assigned to computers that have connected to a service within the Justice Web. The purpose of the MR is to keep track of a computer’s past offenses, and allow services to restrict access using thresholds. For instance, a service within the Justice Web could restrict access to those above -100 MR.&lt;br /&gt;
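As a rough illustration, the threshold check might look like the following sketch; the -100 cutoff, the client identifiers, and the default rating of 0 for unknown clients are illustrative assumptions, not part of the design:&lt;br /&gt;

```python
# Hypothetical sketch of a service-side Morality Rating (MR) threshold check.
# The -100 cutoff, client IDs, and default MR of 0 are invented for illustration.

class Service:
    def __init__(self, mr_threshold=-100):
        self.mr_threshold = mr_threshold
        self.ratings = {}  # client id -> MR (unknown clients default to 0)

    def allow(self, client_id):
        """Allow a connection only if the client's MR is above the threshold."""
        return self.ratings.get(client_id, 0) > self.mr_threshold

svc = Service()
svc.ratings["node-a"] = -150   # a known offender
print(svc.allow("node-a"))     # False: at or below the -100 threshold
print(svc.allow("node-b"))     # True: unknown clients start at 0
```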
&lt;br /&gt;
While the primary purpose of the Justice Web is to protect against attackers from outside the network, every node in the Justice Web is assigned an MR, which increases and decreases based on their actions within the network. Ideally, those with higher MR are allowed access to more shared resources, though this would be implementation specific.&lt;br /&gt;
&lt;br /&gt;
The MR assigned to a computer is local to the Justice Web that assigned the rating. For example, if two separate networks deploy a Justice Web, the ratings they assign do not affect the other network’s ratings.&lt;br /&gt;
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
In order to assign MR to offenders, an authority figure is needed to declare if a crime has been committed. In the Justice Web, this role is taken by the Judges, who may be one or more computers within the network. It is the Judges’ responsibility to create the rules of the network, gather the evidence when a claim is made, declare if a crime has been committed, and assign a new MR based on the ruling.&lt;br /&gt;
&lt;br /&gt;
How a Judge is picked isn’t set in stone, but in general it would be the node(s) in the network with the highest MR. Alternatively, the Judges could be picked through some democratic process.&lt;br /&gt;
&lt;br /&gt;
The judgments made are mostly automated, based on the rules of the network. However, it can be specified that certain crimes, such as a claim of a phishing scam being committed, be dealt with by a human.&lt;br /&gt;
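This mostly-automated judging could be sketched as follows; the set of human-reviewed offenses (here only phishing) and the verdict labels are hypothetical:&lt;br /&gt;

```python
# Hypothetical sketch of automated judging with human escalation.
# Offense names and verdict labels are invented for illustration.

HUMAN_REVIEW = {"phishing"}  # offenses a human Judge must handle

def judge(offense, evidence_ok):
    """Rule automatically, except for offenses that require a human Judge."""
    if offense in HUMAN_REVIEW:
        return "queued_for_human_review"
    return "convicted" if evidence_ok else "dismissed"

print(judge("comment_spam", True))   # convicted
print(judge("comment_spam", False))  # dismissed
print(judge("phishing", True))       # queued_for_human_review
```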
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The Justice Web is a virtual network, in that the nodes are not necessarily connected or even anywhere near each other. Because of this, it would be inconvenient and potentially harmful to have services look up a computer’s MR on every connection attempt. To prevent this, MR will be stored in a central location, but propagated throughout the network.&lt;br /&gt;
&lt;br /&gt;
This is done using a master-slave approach to database replication. The Judges of the network store the “Master List” and propagate the data to the “Slave Lists” stored by the services within the network. The records stored by a Slave List are determined by the thresholds that the specific service has put in place. As mentioned in the Morality Rating subsection, a service can set thresholds to determine if a computer should be allowed access. In the example, an MR of -100 would be blocked from the service. If a service had only this threshold in place, it would only need to be aware of computers at or below -100 MR, and so would only store that data in its Slave List. Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
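One way this threshold-filtered propagation could look, as a sketch (the node IDs, ratings, and single -100 threshold are invented for illustration):&lt;br /&gt;

```python
# Hypothetical sketch of Master/Slave list propagation. A Judge holds the
# Master List; each service receives only the records at or below its own
# blocking threshold, keeping the Slave List small.

def build_slave_list(master, threshold):
    """Return only the entries a service with this threshold needs to block."""
    return {cid: mr for cid, mr in master.items() if mr <= threshold}

master_list = {"node-a": -150, "node-b": -40, "node-c": 10}
slave = build_slave_list(master_list, threshold=-100)
print(slave)  # {'node-a': -150}: only nodes at or below -100 are replicated
```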
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Judges define and use rules to determine whether a crime has been committed. A rule consists of three parts: The offense, the proof needed, and the severity of the punishment. The offense is a name assigned to the crime, which services can claim has been committed. The proof is the required information for the judges to be able to make a conviction. The severity of the punishment is an integer value to negate from the offender’s current MR.&lt;br /&gt;
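The three-part rule structure might be encoded as follows; the specific offense name, proof items, and severity value are hypothetical examples, not values defined by the design:&lt;br /&gt;

```python
# Hypothetical encoding of a Justice Web rule as (offense, proof, severity).
# The concrete values below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Rule:
    offense: str                 # name services use when filing a claim
    proof: list = field(default_factory=list)  # evidence Judges require
    severity: int = 0            # amount subtracted from the offender's MR

comment_spam = Rule(
    offense="comment_spam",
    proof=["web_server_log", "posted_message"],
    severity=25,
)

def apply_punishment(mr, rule):
    """Subtract the rule's severity from the offender's current MR."""
    return mr - rule.severity

print(apply_punishment(0, comment_spam))  # -25
```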
&lt;br /&gt;
Each network deploying a Justice Web specifies their own set of rules. These rules are made available to the public so that services within the network are aware of the crimes they can report. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
&lt;br /&gt;
===Evidence===&lt;br /&gt;
&lt;br /&gt;
Evidence is used by the Justice Web to determine if a crime has been committed. Evidence is stored in encrypted logs located on a service’s computer, and submitted to the judges when a claim is made.&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim or any other system in the chain of custody does not tamper with evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
The type of evidence required varies and is defined by the Judges of a network. For a DDoS attack, the Justice Web would potentially be able to examine the evidence logs and determine, through the analysis of statistical evidence, which computers were actively involved in the attack and which represented legitimate traffic [17].&lt;br /&gt;
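A tamper-evidence check of this kind could be sketched with an HMAC over each log entry; a real deployment would more likely use public-key signatures so Judges can verify logs without sharing the signing key, and the key and log entry below are invented:&lt;br /&gt;

```python
# Hypothetical sketch of tamper-evident evidence logs using an HMAC tag
# per entry. Key and log entry are invented for illustration.
import hmac
import hashlib

KEY = b"service-signing-key"  # illustrative; would be provisioned per host

def sign(entry: bytes) -> bytes:
    """Compute an authentication tag over one log entry."""
    return hmac.new(KEY, entry, hashlib.sha256).digest()

def verify(entry: bytes, tag: bytes) -> bool:
    """Check that the entry was not modified since it was signed."""
    return hmac.compare_digest(sign(entry), tag)

entry = b"2011-04-12T00:50:00Z GET /login.php from node-a"
tag = sign(entry)
print(verify(entry, tag))                 # True: untampered
print(verify(entry + b" [edited]", tag))  # False: tampering detected
```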
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
Membership in a Justice Web would consist primarily of public-facing services seeking protection from attacks. However, because resources can be shared based on a node’s MR, there is also reason for computers to join the network simply to access those resources.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily shared by other self-governing entities. In the real world, cross-jurisdiction legal systems do exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations in which countries participate in so-called &amp;quot;global councils&amp;quot;. Generally in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests in the council.  &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable, opt-in implementation such as the Justice Web is possible at this scale. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of losing incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Morality Rating ===&lt;br /&gt;
&lt;br /&gt;
The global implementation still requires the existence of a morality rating, but in a global setting we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a &amp;quot;master list&amp;quot; or &amp;quot;slave list&amp;quot; of morality ratings. The obvious requirement for a built-in morality rating is that the host itself must not be able to arbitrarily modify the value. One possible mechanism is a Trusted Platform Module ([http://www.trustedcomputinggroup.org/developers/ TPM]), which allows encryption and decryption of data but does not allow extraction of the private encryption key. Indeed, storing the morality rating within hosts rather than on external lists alleviates the need for distributed storage and allows better scalability, but it also requires all hosts to be compliant with the mechanism. &lt;br /&gt;
&lt;br /&gt;
=== Connection management ===&lt;br /&gt;
&lt;br /&gt;
Due to the modified morality rating storage, there is no longer any need to look up the morality rating of a host upon incoming connections. We instead need a way to transmit the morality rating on each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean modifying the underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web. &lt;br /&gt;
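To make the idea concrete, the new field might be sketched as a rating prepended to each outgoing message; the choice of a signed 16-bit integer is an assumption, as the text does not specify a field size:&lt;br /&gt;

```python
# Hypothetical sketch of carrying the morality rating (MR) as a new
# protocol header field: a signed 16-bit integer prepended to each
# outgoing message. Field size and payload are invented for illustration.
import struct

def add_mr_header(mr: int, payload: bytes) -> bytes:
    """Prepend the sender's MR (network byte order) to the payload."""
    return struct.pack("!h", mr) + payload

def read_mr_header(packet: bytes):
    """Split an incoming packet into (MR, payload) at the receiver."""
    (mr,) = struct.unpack("!h", packet[:2])
    return mr, packet[2:]

pkt = add_mr_header(-100, b"GET /index.html")
mr, body = read_mr_header(pkt)
print(mr, body)  # -100 b'GET /index.html'
```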
&lt;br /&gt;
=== Rules and Judges ===&lt;br /&gt;
&lt;br /&gt;
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research. &lt;br /&gt;
&lt;br /&gt;
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but it would require a full reboot of the Internet as well as new hardware, making it a realistically unlikely solution.&lt;br /&gt;
&lt;br /&gt;
==Use Cases==&lt;br /&gt;
&lt;br /&gt;
This section reviews three common attacks and describes how the computer-based justice system would deal with them. &lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam===&lt;br /&gt;
The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted. &lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Web servers that allow open comments can adjust the minimum morality rating required to post a comment. Additionally, if comment spam is detected by another user or the site administrator, the message and the ID of the host that posted it are reported. The offending host then loses morality rating, which may limit its ability to post in the future. In the global implementation, web servers could reject access to the service altogether from the very first connection. &lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service===&lt;br /&gt;
Denial of service is an attack in which a normally available service is accessed by a large number of hosts, or by a small number of hosts at high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&lt;br /&gt;
In the Justice Web, the server has to look up the morality rating of each host participating in the attack, which may further increase the load on the server. The global implementation would see the morality rating of each incoming connection and could filter out (e.g., at the firewall level) hosts below a certain rating. While DoS is generally regarded as a very difficult problem to solve, the morality ratings of participating hosts would be lowered, possibly limiting their ability to perform an attack in the future. &lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Similar to the comment spam attack, once a phishing site is reported and verified by the judges, the morality rating of the hosting node is lowered.&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915) and Claude Hermann Walter Johns (The Encyclopaedia Britannica, 11th ed., 1910), &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, ed. Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;br /&gt;
&lt;br /&gt;
[17] S. Yu, W. Zhou, R. Doss, &#039;&#039;Information theory based detection against network behavior mimicking DDoS attacks&#039;&#039;, IEEE, April 2008, [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1]. Last visited April 2011.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9071</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9071"/>
		<updated>2011-04-03T17:19:36Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections. The first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four malicious acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act is handled such that the punishment benefits the system. A simple example can be visualized through the management of bandwidth: if a particular computer is deemed a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment corrects the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
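The bandwidth-throttling punishment might be sketched as follows; the quota and throttled rate are invented numbers used purely for illustration:&lt;br /&gt;

```python
# Hypothetical sketch of a teleologic punishment: throttle a node that
# exceeds its bandwidth quota instead of expelling it, which frees
# resources for the rest of the system. All rates are invented numbers.

QUOTA_BPS = 1_000_000       # bandwidth each node is allowed
THROTTLED_BPS = 100_000     # reduced rate imposed on offenders

def allowed_rate(measured_bps: int) -> int:
    """Return the rate a node may use after its usage is inspected."""
    if measured_bps > QUOTA_BPS:
        return THROTTLED_BPS  # punish the hog, but keep it on the network
    return QUOTA_BPS

print(allowed_rate(5_000_000))  # 100000: a bandwidth hog is throttled
print(allowed_rate(200_000))    # 1000000: a well-behaved node keeps its quota
```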
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to it. This view is probably best characterised by the phrase “an eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation rests on the concept of deterrence: if you are convicted of a crime, someone will get to exact revenge and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus to internalize how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment may well be necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may further damage the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system within a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity who exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; punishment and reward together are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime. The fine imposed may not equal the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused.[1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals are commonly required to pay a fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types deter future criminals, but each has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison reduces personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad; good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil; good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer’s moral record is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, being part of a crime negligently. An example of such a distinction is whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, avoiding the need for human interventions/investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crimes involving the Internet and computers. An example of a preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, many changes have been made to it because of unspecific instances of how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” doing an act changed the degree of punishment, and accessing a system versus damaging a system had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of then-current security measures on computer networks by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did, and it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error has occurred in the system, or a bit has gone missing, and sensitive information intended for one address has been sent to an incorrect one. If this error caused great losses to some entity, would the user be blamed for it, or would the computer be blamed for negligently sending to the wrong address? This situation resembles how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer has been decided, the next thing to consider is how one would prevent a computer from taking malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of the consequences, or shame, that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings; any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterring possible criminals only works if they fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would bring. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing by idle. However, the human who forces his computer to take such malicious actions may be deterred by the legal consequences, or by the performance drop on his own computer. The penalties currently in place only affect a human: whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent the human user from committing further malicious actions on the network using another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating allows nodes to communicate with other nodes based on how low or high the rating is. Lowering the rating for malicious actions and raising it for being helpful to the system allows computers to &amp;quot;care&amp;quot; about who they are communicating with, and to &amp;quot;feel shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is an incrementally deployable justice system which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes available bandwidth to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service, based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
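As a minimal sketch of this mechanism (the class, the function names, and the point values are illustrative assumptions, not part of any specification), earning and losing points and a server-side admission check might look like:&lt;br /&gt;

```python
# Sketch of a morality-rating admission check for a Justice Web service.
# All names and point values are illustrative assumptions.

STARTING_RATING = 0  # every host begins with a predefined rating

class Host:
    def __init__(self, host_id, rating=STARTING_RATING):
        self.host_id = host_id
        self.rating = rating

    def reward(self, points):
        """Earn points, e.g. for making bandwidth available to other hosts."""
        self.rating += points

    def penalize(self, points):
        """Lose points for violating a rule."""
        self.rating -= points

def may_connect(host, minimum_rating=-100):
    """A server rule: reject hosts whose rating is below the threshold."""
    return host.rating >= minimum_rating

spammer = Host("spammer.example")
spammer.penalize(150)          # convicted of an offense worth 150 points
print(may_connect(spammer))    # rating -150 is below -100: connection refused
```

A real deployment would persist these ratings in the master list rather than in memory; this sketch only shows the threshold logic.&lt;br /&gt;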
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large as more hosts are added, list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision subsets of the master list (which we call &#039;&#039;slave lists&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
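The rating-based partitioning suggested above can be sketched in a few lines (a toy in-memory version; the function name and the example ratings are assumptions, and the threshold of 100 is the example from the text):&lt;br /&gt;

```python
# Sketch: splitting the morality-rating list into a master portion and a
# "slave" subset mirrored to other hosts, keyed on the rating itself.

def partition_list(ratings, threshold=100):
    """Split {host_id: rating} so low-rated entries go to a slave list."""
    slave = {h: r for h, r in ratings.items() if r < threshold}
    master = {h: r for h, r in ratings.items() if r >= threshold}
    return master, slave

ratings = {"alice": 250, "bob": 40, "carol": 100, "dave": -20}
master, slave = partition_list(ratings)
print(sorted(master))  # ['alice', 'carol']
print(sorted(slave))   # ['bob', 'dave']
```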
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances the process of reviewing evidence can be automated, requiring no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In such cases, the review of a judge becomes necessary. &lt;br /&gt;
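A sketch of the automated portion of claim handling, with evidence checking reduced to a placeholder predicate (the names, the rule table, and the severity value are illustrative assumptions):&lt;br /&gt;

```python
# Sketch of automated claim review by a judge node. When the evidence
# suffices, the offender's rating drops by the rule's severity; otherwise
# the claim is deferred to a (human-assisted) judge.

from dataclasses import dataclass

@dataclass
class Claim:
    claimant: str
    offender: str
    offense: str
    evidence: bytes  # encrypted evidence log (opaque in this sketch)

def review(claim, ratings, rules, evidence_ok):
    rule = rules.get(claim.offense)
    if rule is None:
        return "dismissed: no such offense"
    if not evidence_ok(claim.evidence):
        return "deferred: judge review required"
    ratings[claim.offender] = ratings.get(claim.offender, 0) - rule["severity"]
    return "convicted"

rules = {"comment-spam": {"severity": 50}}
ratings = {"spammer.example": 0}
claim = Claim("blog.example", "spammer.example", "comment-spam", b"...")
print(review(claim, ratings, rules, evidence_ok=lambda e: True))  # convicted
print(ratings["spammer.example"])  # -50
```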
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that by default the judge be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by a majority of hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level and type of proof is required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but should be agreed upon by all hosts on the network or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
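The three-part rule record described above can be sketched as follows (the field names and the example rule set are assumptions; in a real Justice Web the rule set would be published by the judges and visible to every host):&lt;br /&gt;

```python
# Sketch of a rule as the three-part record the text describes: the
# offense, the proof required to convict, and the severity of the
# punishment in morality-rating points.

from typing import NamedTuple, Optional

class Rule(NamedTuple):
    offense: str
    required_proof: str   # what kind of evidence log suffices
    severity: int         # morality points lost on conviction

# A rule set published by the judges, visible to every host.
RULES = [
    Rule("comment-spam", "signed web server log", 50),
    Rule("denial-of-service", "signed packet capture", 200),
]

def lookup(offense: str) -> Optional[Rule]:
    """Find the published rule for an offense, if any."""
    return next((r for r in RULES if r.offense == offense), None)

print(lookup("comment-spam").severity)  # 50
```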
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application-layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim, or any other system in the chain of custody, does not tamper with the evidence. When judges receive evidence, the logs are decrypted and reviewed.&lt;br /&gt;
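As a self-contained stand-in for the digital signatures described above, the following sketch uses an HMAC keyed with a secret shared between the reporting host and the judges (the key handling and the 32-byte layout are assumptions; a real system would use public-key signatures so that any judge could verify without a shared secret):&lt;br /&gt;

```python
# Sketch of tamper-evident evidence logs using a keyed MAC: any change
# to the log in the chain of custody invalidates the attached tag.

import hmac, hashlib

def sign_log(key: bytes, log: bytes) -> bytes:
    """Prepend a 32-byte SHA-256 HMAC tag to the log."""
    return hmac.new(key, log, hashlib.sha256).digest() + log

def verify_log(key: bytes, signed: bytes):
    """Return the log if the tag checks out, else None."""
    tag, log = signed[:32], signed[32:]
    expected = hmac.new(key, log, hashlib.sha256).digest()
    return log if hmac.compare_digest(tag, expected) else None

key = b"secret-shared-with-judges"
signed = sign_log(key, b"GET /post-comment from 10.0.0.7")
print(verify_log(key, signed) is not None)          # intact evidence
print(verify_log(key, signed[:-1] + b"X") is None)  # tampered evidence
```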
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three primary reasons a host may choose to join a Justice Web:&lt;br /&gt;
&lt;br /&gt;
1) The host is a server seeking to protect itself from malicious traffic by accessing a collective history of source hosts&#039; connections.&lt;br /&gt;
&lt;br /&gt;
2) The host is a client seeking to determine how &amp;quot;safe&amp;quot; it would be to access a given server, based on that server&#039;s past actions.&lt;br /&gt;
&lt;br /&gt;
3) The host is either a server or client system seeking access to restricted resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a host retains its existing morality rating.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily the same across different self-governing entities. In the real world, cross-jurisdiction legal systems do exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally, in these councils each participating member country appoints one or more people to represent the country&#039;s interests.  &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable, opt-in implementation such as the Justice Web is possible at this scale. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of losing incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Morality Rating ===&lt;br /&gt;
&lt;br /&gt;
The global implementation still requires the existence of a morality rating, but in a global setting we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a &amp;quot;master list&amp;quot; or a &amp;quot;slave list&amp;quot; of morality ratings. The obvious requirement for a built-in morality rating is that the host itself must not be able to arbitrarily modify the value. One possible mechanism is a Trusted Platform Module ([http://www.trustedcomputinggroup.org/developers/ TPM]), which allows encryption and decryption of data but does not allow the extraction of the private encryption key. Indeed, storing the morality rating within hosts rather than in external lists alleviates the need for distributed storage and allows better scalability, but it also requires all hosts to be compliant with the mechanism. &lt;br /&gt;
&lt;br /&gt;
=== Connection management ===&lt;br /&gt;
&lt;br /&gt;
With this modified morality rating storage, there is no longer a need to look up the morality rating of a host upon incoming connections. We instead need a way to transmit the morality rating with each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean changing underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web. &lt;br /&gt;
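To illustrate the idea (the packet is modelled as a plain dictionary; the field names are assumptions, not a real protocol extension), embedding the rating in each outgoing connection and deciding server-side might look like:&lt;br /&gt;

```python
# Sketch: carrying the morality rating in each outgoing connection so the
# server decides locally, with no list lookup. In a real deployment the
# rating field would be written and protected by TPM-backed hardware.

def make_packet(src, dst, rating, payload):
    """Build a connection request carrying the sender's morality rating."""
    return {"src": src, "dst": dst, "morality": rating, "payload": payload}

def accept(packet, minimum_rating):
    """Server-side admission decision from the embedded rating alone."""
    return packet["morality"] >= minimum_rating

pkt = make_packet("client.example", "server.example", -120, b"hello")
print(accept(pkt, minimum_rating=-100))  # False: rating too low
```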
&lt;br /&gt;
=== Rules and Judges ===&lt;br /&gt;
&lt;br /&gt;
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research. &lt;br /&gt;
&lt;br /&gt;
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but it would require a full reboot of the Internet as well as new hardware, making it a realistically unlikely solution.&lt;br /&gt;
&lt;br /&gt;
==Use Cases==&lt;br /&gt;
&lt;br /&gt;
This section reviews three common attacks and describes how the computer-based justice system would deal with them. &lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam===&lt;br /&gt;
The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted. &lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Web servers that allow open comments can adjust the minimum morality rating required to post a comment. Additionally, if comment spam is detected by another user or the site administrator, the message and the ID of the host that posted the comment are reported. The host then loses morality rating points for posting comment spam, which may limit its ability to post in the future. In the global implementation, web servers would reject access to the service altogether from the initial connection. &lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service===&lt;br /&gt;
Denial of service is an attack in which a service that is normally available is accessed by a large number of hosts, or by a small number of hosts at high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&lt;br /&gt;
In the Justice Web, the server has to look up the morality rating of each host participating in the attack, which may further increase the load on the server. The global implementation would see the morality rating of each incoming connection and could filter (e.g., at the firewall level) hosts below a certain rating. While DoS is generally regarded as a very difficult problem to solve, the morality ratings of participating hosts would be lowered, possibly limiting their ability to perform attacks in the future. &lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Similar to the comment spam attack, once a phishing site is reported and verified by the judges, the morality rating of the hosting node is lowered.&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, translated by Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review, 2003.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9070</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9070"/>
		<updated>2011-04-03T17:18:29Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Use Cases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just, or intrinsically valuable, even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus he should internalize how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effects of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime but still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed within the system such that some computers can hand out punishments upon other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system into a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; punishment and reward together are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive as commonly criminals may be asked to pay a penalty fine as well as serve a prison sentence; however they all serve different purposes. All three punishment types serve as a deterrent to future criminals but each method has a different active agent; corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, which is divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad: good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say DoS or spam attacks) then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not) then you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) is used to categorize the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, down to the lowest, being part of a crime negligently. An example of such a distinction is deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;, there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue a system of justice for computers can take one of two approaches; it can attempt to discover the intent of the user of the computer before distribution of punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer then this system must involve a human who can decipher human reason. This would create a justice system that would have a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act are punished with the same severity to prevent the need for human interventions/investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime on the Internet and in computers. One example of a preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because of unspecific instances of how different crimes were categorized based on the mens rea: the distinction between “knowingly” and “intentionally” doing an act changes the degree of punishment, and the difference between accessing a system and damaging a system had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on computer networks (the Internet) of the time by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except in the case where an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created many losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This type of situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from the computer continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to prevent a computer from taking malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of the consequences, or shame, that would come from malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equal with respect to what a computer prefers to do. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit over penalty that they would procure from a malicious act. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. The human who forces his computer to take such malicious actions, however, may be deterred by the consequences that might follow from the law, or by the possible performance drop of his own computer. The penalties currently in place only affect a human: whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on a computer network by the same human user at another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would ideally one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; instead, a new system must be implemented so that computers on the system are deterred from malicious actions. Implementing a morality system in which every node has a personal morality rating allows nodes to communicate with other nodes based on how low or high that rating is. Lowering a node&#039;s morality because of malicious actions, and raising it when the node is helpful to the system, allows computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated feeling of care and shame might allow a justice system to be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of the system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. We view the implementation as an incrementally deployable justice system, which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule which specifies that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
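The rating mechanics described above can be sketched in a few lines of Python (a minimal sketch only; the class and method names are illustrative and not part of any specified protocol):&lt;br /&gt;

```python
# Minimal sketch of per-network morality ratings. Each Justice Web
# keeps its own ledger, so the same host can be rated differently
# in different webs.
class MoralityLedger:
    def __init__(self, initial_rating=0):
        self.initial_rating = initial_rating
        self.ratings = {}

    def adjust(self, host_id, delta):
        """Earn (positive delta) or lose (negative delta) points."""
        current = self.ratings.get(host_id, self.initial_rating)
        self.ratings[host_id] = current + delta

    def allows(self, host_id, threshold):
        """A service admits a host only if its rating meets that
        service's own threshold (e.g., -100)."""
        return self.ratings.get(host_id, self.initial_rating) >= threshold

# Two independent Justice Webs rating the same host differently:
web_a = MoralityLedger()
web_b = MoralityLedger()
web_a.adjust("host-1", -150)  # e.g., caught spamming in web A
web_b.adjust("host-1", 20)    # e.g., shared bandwidth in web B
```

A server in web A applying a -100 cutoff would refuse &amp;quot;host-1&amp;quot;, while web B would still admit it, mirroring how an action can be criminal in one jurisdiction but not another.&lt;br /&gt;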
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow to a large size (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries for hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
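A possible partitioning of the master list, keyed on the ratings themselves as suggested above, could look like the following (a sketch only; the cutoff of 100 and all names are illustrative):&lt;br /&gt;

```python
# Hypothetical split of the master list: poorly rated hosts are
# mirrored out to a "slave list" on other hosts, while the rest
# stay on the central master list.
def partition_master_list(ratings, cutoff=100):
    master, slave = {}, {}
    for host, rating in ratings.items():
        if rating >= cutoff:
            master[host] = rating
        else:
            slave[host] = rating
    return master, slave

master, slave = partition_master_list({"a": 250, "b": 40, "c": -10})
# "a" stays on the master list; "b" and "c" form the slave list.
```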
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances the process of reviewing evidence can be automated and would require no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In cases like these, the review of a judge becomes necessary. &lt;br /&gt;
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that the judge by default be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by a majority of hosts in the network.&lt;br /&gt;
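Both selection rules suggested above can be stated concisely (a sketch only; the function names are our own, not part of any specified protocol):&lt;br /&gt;

```python
from collections import Counter

# Default rule: the node with the highest morality rating is judge.
def default_judge(ratings):
    return max(ratings, key=ratings.get)

# Democratic alternative: a judge must win a strict majority of votes.
def elected_judge(votes):
    candidate, count = Counter(votes).most_common(1)[0]
    if count * 2 > len(votes):
        return candidate
    return None  # no majority, no judge elected
```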
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level and type of proof is required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but should be agreed upon by all hosts on the network or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
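The three-part rule record described above might be represented as follows (the field names and example rules are assumptions made for illustration):&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    offense: str          # the act that is forbidden
    required_proof: str   # level/type of evidence needed to convict
    penalty_points: int   # morality points lost on conviction

# A Justice Web publishes its rules so every host can see them,
# much like a public criminal code.
RULES = [
    Rule("comment spam", "signed server log", 50),
    Rule("denial of service", "signed packet capture", 200),
]
```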
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim or any other system in the chain of custody does not tamper with evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
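As a rough illustration of tamper-evident logs, the sketch below seals a log with an HMAC under a key shared between the reporting host and the judge; a real deployment would more likely use public-key signatures, and the key handling here is purely illustrative:&lt;br /&gt;

```python
import hashlib
import hmac

def seal_log(key: bytes, log: bytes) -> bytes:
    """Prepend a MAC so tampering in the chain of custody is detectable."""
    return hmac.new(key, log, hashlib.sha256).digest() + log

def open_log(key: bytes, sealed: bytes) -> bytes:
    """Verify the MAC before a judge reviews the evidence."""
    tag, log = sealed[:32], sealed[32:]
    expected = hmac.new(key, log, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("evidence log was tampered with")
    return log
```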
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three primary reasons a host may choose to join a Justice Web:&lt;br /&gt;
&lt;br /&gt;
1) The host is a server seeking to protect itself from malicious traffic by accessing a collective history of source hosts&#039; connections.&lt;br /&gt;
&lt;br /&gt;
2) The host is a client seeking to determine how &amp;quot;safe&amp;quot; it would be to access a given server based on that server&#039;s past actions.&lt;br /&gt;
&lt;br /&gt;
3) The host is either a server or client system seeking access to restricted resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a host retains its existing morality rating.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by the lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built from millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily shared with other self-governing entities. In the real world, cross-jurisdiction legal systems do exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests in the council.  &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable, opt-in implementation like the Justice Web is possible at the global scale. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Morality Rating ===&lt;br /&gt;
&lt;br /&gt;
The global implementation still requires the existence of a morality rating, but in a global setting, we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a &amp;quot;master list&amp;quot; or &amp;quot;slave list&amp;quot; of morality ratings. The obvious requirement for a built-in morality rating is that the host itself should not be able to arbitrarily modify the value. One possible mechanism is a Trusted Platform Module ([http://www.trustedcomputinggroup.org/developers/ TPM]), which allows encryption and decryption of data but does not allow extraction of the private encryption key. Storing the morality rating within hosts rather than on external lists alleviates the need for distributed storage and allows better scalability, but it also requires all hosts to be compliant with the mechanism. &lt;br /&gt;
&lt;br /&gt;
=== Connection management ===&lt;br /&gt;
&lt;br /&gt;
Due to the modified morality rating storage, there is no longer a need to look up the morality rating of a host upon incoming connections. We instead need a way to transmit the morality rating on each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean modifying the underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web. &lt;br /&gt;
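A minimal sketch of such a field, assuming a hypothetical two-byte morality rating prepended to each outgoing request (no existing protocol carries this field; the layout and threshold are purely illustrative):&lt;br /&gt;

```python
import struct

# Hypothetical wire format: an unsigned 16-bit morality rating in network
# byte order, prepended to the connection payload.
def build_request(morality, payload):
    """Prepend the sender's morality rating to an outgoing request."""
    return struct.pack("!H", morality) + payload

def accept_connection(packet, min_rating):
    """Server-side check: reject if the advertised rating is too low."""
    (rating,) = struct.unpack("!H", packet[:2])
    if rating < min_rating:
        return None          # reject the connection outright
    return packet[2:]        # accept: hand the payload to the application
```

&lt;br /&gt;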
&lt;br /&gt;
=== Rules and Judges ===&lt;br /&gt;
&lt;br /&gt;
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research. &lt;br /&gt;
&lt;br /&gt;
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but it would require a full reboot of the Internet as well as new hardware, making it a realistically unlikely solution.&lt;br /&gt;
&lt;br /&gt;
==Use Cases==&lt;br /&gt;
&lt;br /&gt;
This section reviews three common attacks and describes how the computer-based justice system would deal with them.&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam===&lt;br /&gt;
The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted. &lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Web servers that allow open comments can adjust the minimum morality rating required to post a comment. Additionally, if comment spam is detected by another user or the site administrator, the message and the ID of the host that posted it are reported. The offending host loses morality rating points for posting comment spam, which may limit its ability to post in the future. In the global implementation, web servers would deny such hosts access to the service altogether from the initial connection. &lt;br /&gt;
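The policy above might be sketched as follows; the rating scale, threshold, and penalty values are illustrative assumptions, not values prescribed by the design:&lt;br /&gt;

```python
# Illustrative sketch of the comment-spam policy: a server-side posting
# check against the poster's morality rating, and a report handler that
# deducts points once the judges verify the evidence.
ratings = {"host-a": 80, "host-b": 20}   # assumed 0-100 morality scale
MIN_TO_POST = 50                         # server-chosen posting threshold
SPAM_PENALTY = 30                        # points lost per verified report

def may_post(host):
    """Allow a comment only from hosts at or above the threshold."""
    return ratings.get(host, 0) >= MIN_TO_POST

def report_spam(host):
    """Called only after the ring of judges verifies the claim."""
    ratings[host] = max(0, ratings[host] - SPAM_PENALTY)
```

Repeated verified reports drive the rating below the threshold, at which point the host can no longer post.&lt;br /&gt;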
&lt;br /&gt;
===Case 2: Denial of Service===&lt;br /&gt;
Denial of service is an attack in which a service that is normally available is accessed by a large number of hosts, or by a small number of hosts at high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&lt;br /&gt;
In the Justice Web, the server has to look up the morality rating of each host participating in the attack, which may further increase the load on the server. The global implementation would see the morality rating of each incoming connection and could filter (e.g., at the firewall level) hosts below a certain rating. While DoS is generally regarded as a very difficult problem to solve, the morality ratings of participating hosts would be lowered, possibly limiting their ability to perform attacks in the future. &lt;br /&gt;
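A sketch of the firewall-level filtering described above, assuming incoming connections are visible as (host, rating) pairs and that the cutoff is locally configurable:&lt;br /&gt;

```python
# Illustrative sketch: drop connections whose advertised morality rating
# falls below a locally configured threshold, before they reach the server.
def filter_connections(connections, min_rating=40):
    """connections: iterable of (host, rating) pairs read off the wire."""
    return [host for host, rating in connections if rating >= min_rating]
```

&lt;br /&gt;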
&lt;br /&gt;
===Case 3: Phishing===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system because the deviant act involves hosting a website, which is itself perfectly legal. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Similar to the comment spam attack, once a phishing site is reported and verified by the judges, the morality rating of the hosting node is lowered.&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, ed. Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9069</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9069"/>
		<updated>2011-04-03T17:15:43Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Use Case Investigation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment. [3] The retributive point of view holds that it is better to punish someone who commits a crime regardless of the severity of the punishment. [2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment to retaliation. Although they both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed within the system such that some computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad: good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others could certainly work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, being part of a crime through negligence. An example of such a case would be distinguishing whether a car hit someone intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to prevent the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime involving computers and the Internet. One preventative measure was the Computer Fraud and Abuse Act (“CFAA”), created by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because crimes were categorized ambiguously with respect to mens rea: the difference between “knowingly” and “intentionally” doing an act changes the degree of punishment, and the distinction between accessing a system and damaging a system had to be made more specific over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University, who attempted to demonstrate the inadequacies of the security measures of the time on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes the commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error occurs in the system; for example, a lost bit may cause sensitive information to be sent to an incorrect address. If this error created losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending the information to the wrong address? This situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from the computer continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to prevent a computer from performing malicious actions. Following the general deterrence theory, we would try to instill some fear of the consequences, or shame, that would come from causing malicious actions. This approach is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterrence only works on potential criminals who are afraid of the consequences and cannot accept the ratio of profit over penalty that they would procure from a malicious act. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing by idle. The human who forces his computer to perform such malicious actions, however, may be deterred by the consequences that might follow from the law, or by the possible performance drop of his own computer. The penalties currently in place only affect a human; whether the human is sentenced to jail or the physical computer is confiscated, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on the network by the human user using another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; a new system must therefore be implemented so that computers on the network are deterred from malicious actions. Giving every node a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for helpful behaviour lets computers &amp;quot;care&amp;quot; about who they communicate with, and &amp;quot;feel shame&amp;quot; when their rating is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated feeling of care and shame might allow a justice system to be implemented on computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is nevertheless important to describe the features a fully functional system would require and to outline its potential benefits and shortcomings. In fact, we were unable to devise one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, both take a teleologic-retributive approach to justice: punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system.&lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes the above discussion of justice into consideration and applies it to the management of a computer network. We view the implementation as an incrementally deployable justice system, which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to hosts inside the network, as well as to connections coming from outside it. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply granting or removing points from hosts according to whether they honour a contract.&lt;br /&gt;
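To make the point mechanics concrete, the bookkeeping might look like the following sketch. The starting rating, point values, class and rule names here are hypothetical illustrations, not part of the proposal.&lt;br /&gt;

```python
# Hypothetical sketch of morality-rating bookkeeping for a Justice Web.
# The starting rating, point values, and reasons are illustrative only.

STARTING_RATING = 0

class MoralityLedger:
    def __init__(self):
        self.ratings = {}  # host ID -> current morality rating

    def rating(self, host_id):
        # Hosts not yet seen begin at the predefined starting rating.
        return self.ratings.get(host_id, STARTING_RATING)

    def adjust(self, host_id, points, reason):
        # Called after a judge reviews evidence; positive points reward
        # helpful behaviour, negative points punish rule violations.
        self.ratings[host_id] = self.rating(host_id) + points
        return self.ratings[host_id]

ledger = MoralityLedger()
ledger.adjust("host-a", +10, "shared bandwidth")   # helpful behaviour
ledger.adjust("host-a", -150, "comment spam")      # rule violation
print(ledger.rating("host-a"))                     # -140
```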
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer with a low morality rating in one network might have a good rating in another. Indeed, each Justice Web may have a set of rules which overlaps with, or is entirely different from, another Justice Web&#039;s. This is similar to real-world justice, where an action may be considered criminal in one country but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large as hosts are added, list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision subsets of the master list (which we call &#039;&#039;slave lists&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries for hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
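The threshold-based mirroring described above can be sketched as follows. The 100-point cutoff comes from the example; the function name and data layout are invented for illustration.&lt;br /&gt;

```python
# Hypothetical sketch: split the master list into a slave list and a
# residual master list using the rating threshold from the example above.

SLAVE_THRESHOLD = 100

def partition_master_list(ratings):
    """ratings maps host ID -> morality rating; returns (slave, master)."""
    slave = {h: r for h, r in ratings.items() if r < SLAVE_THRESHOLD}
    master = {h: r for h, r in ratings.items() if r >= SLAVE_THRESHOLD}
    return slave, master

ratings = {"host-a": -140, "host-b": 50, "host-c": 500}
slave_list, master_list = partition_master_list(ratings)
print(sorted(slave_list))   # ['host-a', 'host-b']
print(sorted(master_list))  # ['host-c']
```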
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the offender&#039;s morality rating based on the severity of the offense. In many instances, the process of reviewing evidence can be automated and requires no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In such cases, the review of a judge becomes necessary.&lt;br /&gt;
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that the judge by default be the node in the network with the highest morality rating. This node will generally belong to the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges are elected by a majority of hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level and type of proof is required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but should be agreed upon by all hosts on the network or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
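The three-part rule structure could be represented as simply as the sketch below; the offenses, evidence descriptions, and point penalties are invented examples, not rules proposed by the article.&lt;br /&gt;

```python
# Hypothetical sketch of the three-part rule structure: the offense, the
# evidence required for conviction, and the penalty in morality points.

from collections import namedtuple

Rule = namedtuple("Rule", ["offense", "required_evidence", "penalty_points"])

# Example rule set for one Justice Web; contents are illustrative only.
RULES = [
    Rule("comment spam", "signed web server log", 50),
    Rule("denial of service", "signed packet capture", 200),
]

def penalty_for(offense):
    # Look up the penalty a conviction for this offense carries.
    for rule in RULES:
        if rule.offense == offense:
            return rule.penalty_points
    return None  # no such rule in this Justice Web

print(penalty_for("comment spam"))  # 50
```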
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim or any other system in the chain of custody does not tamper with evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
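Tamper evidence for logs could be provided with a standard message authentication code. The sketch below uses an HMAC shared between the reporting host and the judge; the key handling and log format are assumptions, and the encrypted variant described above would need public-key machinery not shown here.&lt;br /&gt;

```python
# Hypothetical sketch: authenticate an evidence log so a judge can detect
# tampering in the chain of custody. Assumes the reporting host and the
# judge share a secret key; a deployed system would more likely use
# public-key signatures.

import hashlib
import hmac

SHARED_KEY = b"judge-and-host-shared-secret"  # placeholder key

def sign_log(log_bytes):
    return hmac.new(SHARED_KEY, log_bytes, hashlib.sha256).hexdigest()

def verify_log(log_bytes, signature):
    # Constant-time comparison avoids leaking the expected digest.
    return hmac.compare_digest(sign_log(log_bytes), signature)

log = b"2011-04-12 00:50 host-a POST /blog/comments (spam payload)"
sig = sign_log(log)

print(verify_log(log, sig))                  # True: log is intact
print(verify_log(log + b" tampered", sig))   # False: tampering detected
```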
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three primary reasons a host may choose to join a Justice Web:&lt;br /&gt;
&lt;br /&gt;
1) The host is a server seeking to protect itself from malicious traffic by accessing a collective history of source hosts&#039; connections.&lt;br /&gt;
&lt;br /&gt;
2) The host is a client seeking to determine how &amp;quot;safe&amp;quot; it would be to access a given server based on that server&#039;s past actions.&lt;br /&gt;
&lt;br /&gt;
3) The host is either a server or client system seeking access to restricted resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a host retains its existing morality rating.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by the lower rungs. Again, this would be implementation specific, but it would make particular sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built from millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily shared with other self-governing entities. In the real world, cross-jurisdiction legal systems do exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations in which countries participate in so-called &amp;quot;global councils&amp;quot;. Generally in these councils, each participating member country appoints one or more people to represent the country&#039;s interests.&lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable implementation such as the Justice Web, where hosts opt in, is possible at the global level. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Morality Rating ===&lt;br /&gt;
&lt;br /&gt;
The global implementation still requires the existence of a morality rating, but in a global setting we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a &amp;quot;master list&amp;quot; or &amp;quot;slave list&amp;quot; of morality ratings. The obvious requirement for a built-in morality rating is that the host itself must not be able to arbitrarily modify the value. One possible mechanism is a [http://www.trustedcomputinggroup.org/developers/ Trusted Platform Module (TPM)], which allows encryption and decryption of data but does not allow extraction of the private encryption key. Indeed, storing the morality rating within hosts rather than on external lists alleviates the need for distributed storage and allows better scalability, but also requires all hosts to be compliant with the mechanism.&lt;br /&gt;
&lt;br /&gt;
=== Connection management ===&lt;br /&gt;
&lt;br /&gt;
With this modified morality rating storage, there is no longer a need to look up the morality rating of a host on each incoming connection. We instead need a way to transmit the morality rating on each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean modifying the underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web.&lt;br /&gt;
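The extra protocol field could be as small as one signed integer carried with each connection request. The byte layout below is purely illustrative; no real protocol defines such a field.&lt;br /&gt;

```python
# Hypothetical sketch: carry the sender's morality rating as an extra
# field in a connection request. The 4-byte signed big-endian layout is
# an illustrative choice, not a real protocol.

import struct

HEADER_FMT = "!i"  # one 32-bit signed integer: the morality rating

def make_request(morality_rating, payload):
    # Prepend the rating field to the application payload.
    return struct.pack(HEADER_FMT, morality_rating) + payload

def accept_connection(request, minimum_rating):
    # The server reads the rating and decides whether to allow the
    # connection, with no external list lookup required.
    (rating,) = struct.unpack_from(HEADER_FMT, request)
    return rating >= minimum_rating

request = make_request(-140, b"GET /index.html")
print(accept_connection(request, minimum_rating=-100))  # False: rejected
print(accept_connection(request, minimum_rating=-200))  # True: allowed
```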
&lt;br /&gt;
=== Rules and Judges ===&lt;br /&gt;
&lt;br /&gt;
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research. &lt;br /&gt;
&lt;br /&gt;
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but it would require a full reboot of the Internet as well as new hardware, making it a realistically unlikely solution.&lt;br /&gt;
&lt;br /&gt;
==Use Cases==&lt;br /&gt;
&lt;br /&gt;
This section reviews three common attacks and describes how the computer-based justice system would deal with them.&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted. &lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
Web servers that allow open comments can adjust the minimum morality rating required to post a comment. Additionally, if comment spam is detected by another user or the site administrator, the message and the ID of the host that posted it are reported. The offending host then loses morality points, which may limit its ability to post in the future. In the global implementation, web servers could reject such hosts outright on their first connection.&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a normally available service is accessed by a large number of hosts, or by a small number of hosts at high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&lt;br /&gt;
In the Justice Web, the server must look up the morality rating of each host participating in the attack, which may further increase the load on the server. In the global implementation, the server would see the morality rating of each incoming connection and could filter (e.g., at the firewall level) hosts below a certain rating. While DoS is generally regarded as a very difficult problem to solve, the morality ratings of participating hosts would be lowered, possibly limiting their ability to perform attacks in the future.&lt;br /&gt;
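In the global implementation, the filtering step might reduce to a single comparison per connection, as in this sketch (the threshold and the connection representation are invented for illustration):&lt;br /&gt;

```python
# Hypothetical sketch: firewall-level filtering of incoming connections
# by the morality rating each connection carries (global implementation).

DROP_BELOW = -100  # illustrative threshold

def filter_connections(connections):
    """connections is a list of (host ID, morality rating) pairs; keep
    only hosts whose rating meets the threshold."""
    return [host for host, rating in connections if rating >= DROP_BELOW]

incoming = [("host-a", -140), ("host-b", 20), ("host-c", -500)]
print(filter_connections(incoming))  # ['host-b']
```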
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves behaviour that is, technically, perfectly legal. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.&lt;br /&gt;
&lt;br /&gt;
==== Solution ====&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &#039;&#039;A Theory of Justice: Revised Edition&#039;&#039;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &#039;&#039;Discipline &amp;amp; Punish: The Birth of the Prison&#039;&#039;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &#039;&#039;Ecce Homo &amp;amp; The Antichrist&#039;&#039;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. (1910), &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9068</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9068"/>
		<updated>2011-04-03T17:03:08Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections. The first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society.&lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic&#039;&#039;&#039;, &#039;&#039;&#039;Retributive&#039;&#039;&#039;, and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to it. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to discuss retributive punishment in comparison to retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus he should internalize how the crime has a negative effect on society. [1]&lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effect of the act, and it may adversely affect the system.&lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed within the system such that some computers can hand out punishments upon other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system into a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories; corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of their crimes. The fine imposed may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused.[1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive; criminals are commonly asked to pay a fine as well as serve a prison sentence. However, the methods serve different purposes. All three punishment types deter future criminals, but each has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison reduces personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers, so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels rank from highest to lowest culpability: committing a crime on purpose is the most culpable, and taking part in a crime negligently is the least. An example of such a distinction is deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, removing the need for human interventions/investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime involving the Internet and computers. One preventative measure was created in 1984, when the United States Congress passed the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many amendments because of unclear specifications of how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on the computer networks of the time (the early Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes the commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error has occurred in the system; for instance, a bit has gone missing and sensitive information intended for one address has been sent to an incorrect one. If this error caused large losses to some entity, would the user be blamed, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are built, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from performing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of the consequences, or shame, that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equal with respect to what a computer prefers to do. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that a malicious act will procure. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. However, the human who forces their computer to take such malicious actions may be deterred by the consequences that might follow from the law, or by the possible performance drop of their own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on a computer network by the same human user at another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high that rating is. Lowering a node&#039;s morality for malicious actions, and raising it for being helpful to the system, allows computers to &amp;quot;care&amp;quot; about who they are communicating with, and to &amp;quot;feel shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline its potential benefits and shortcomings. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed immediately if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. We view the implementation as an incrementally deployable justice system, which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to hosts inside the network, as well as to connections coming from outside it. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honour a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a host&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
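The point mechanics above can be sketched as follows. This is an illustrative sketch only: the in-memory rating store, the starting value of zero, and the point amounts are our own assumptions; only the -100 cutoff comes from the example above.&lt;br /&gt;

```python
# Hypothetical sketch of Justice Web morality ratings; the store,
# starting rating, and point values are illustrative assumptions.
DEFAULT_RATING = 0

def update_rating(ratings, host, delta):
    """Earn or lose morality points according to behaviour."""
    ratings[host] = ratings.get(host, DEFAULT_RATING) + delta
    return ratings[host]

def may_connect(ratings, host, minimum=-100):
    """Server-side rule: refuse hosts whose rating falls below a minimum."""
    return ratings.get(host, DEFAULT_RATING) >= minimum

ratings = {}
update_rating(ratings, "host-a", 10)    # e.g., shared bandwidth with peers
update_rating(ratings, "host-b", -150)  # e.g., convicted of an offense
print(may_connect(ratings, "host-a"))   # True
print(may_connect(ratings, "host-b"))   # False
```

Each server applies its own threshold rule locally against the distributed rating list.&lt;br /&gt;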
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
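The mirroring rule could be sketched as follows; the dict-based storage is an illustrative assumption, while the threshold of 100 comes from the example above.&lt;br /&gt;

```python
# A sketch of the mirroring rule: well-rated hosts stay on the master
# list, the rest are mirrored out to a slave list on other hosts.
def partition_ratings(ratings, threshold=100):
    master, slave = {}, {}
    for host, rating in ratings.items():
        if rating >= threshold:
            master[host] = rating
        else:
            slave[host] = rating  # copied to other hosts in the network
    return master, slave

master, slave = partition_ratings({"a": 250, "b": 40, "c": -10})
print(sorted(master))  # ['a']
print(sorted(slave))   # ['b', 'c']
```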
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances, the process of reviewing evidence can be automated and would require no manual verification. However, some cases may not have enough evidence, or there may be a reason to justify the offense. In cases like these, the review of a judge becomes necessary. &lt;br /&gt;
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that the judge by default be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by a majority of hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level and type of proof is required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but should be agreed upon by all hosts on the network or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
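A rule of this three-part form might be represented as follows; the offense names, proof descriptions, and point penalties are invented examples, not rules from any actual Justice Web.&lt;br /&gt;

```python
# Sketch of the three-part rule structure: offense, required proof,
# and punishment severity. All concrete values here are invented.
from dataclasses import dataclass

@dataclass
class Rule:
    offense: str          # what act constitutes the offense
    required_proof: str   # level/type of evidence needed to convict
    penalty: int          # morality points lost on conviction

# A Justice Web publishes its rules so every host can see them.
rules = [
    Rule("comment spam", "signed web server log", 50),
    Rule("denial of service", "signed packet capture", 200),
]

def penalty_for(offense):
    """Look up the punishment severity for a given offense."""
    for rule in rules:
        if rule.offense == offense:
            return rule.penalty
    return 0  # acts not named in the rules carry no penalty

print(penalty_for("denial of service"))  # 200
```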
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim, or any other system in the chain of custody, does not tamper with the evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
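The tamper-evidence requirement could be sketched as follows. Where the text calls for digital signatures or encryption, this sketch substitutes a shared-secret HMAC as a minimal stand-in; the key and log contents are illustrative.&lt;br /&gt;

```python
# Minimal integrity check for evidence logs. A real deployment would use
# public-key signatures; the shared-secret HMAC here is a stand-in.
import hmac
import hashlib

def sign_log(log_bytes, key):
    """Produce a tag judges can use to detect tampering in the chain of custody."""
    return hmac.new(key, log_bytes, hashlib.sha256).hexdigest()

def verify_log(log_bytes, key, tag):
    """Constant-time comparison against the submitted tag."""
    return hmac.compare_digest(sign_log(log_bytes, key), tag)

key = b"secret-shared-with-judge"   # illustrative key
log = b"00:50 connection flood observed from host-b"
tag = sign_log(log, key)
print(verify_log(log, key, tag))                 # True
print(verify_log(log + b" (edited)", key, tag))  # False
```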
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three primary reasons a host may choose to join a Justice Web:&lt;br /&gt;
&lt;br /&gt;
1) The host is a server seeking to protect itself from malicious traffic by accessing a collective history of source hosts&#039; connections.&lt;br /&gt;
&lt;br /&gt;
2) The host is a client seeking to determine how &amp;quot;safe&amp;quot; it would be to access a given server based on that server&#039;s past actions.&lt;br /&gt;
&lt;br /&gt;
3) The host is either a server or client system seeking access to restricted resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a host retains its existing morality rating.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily the same across different self-governing entities. In the real world, cross-jurisdiction legal systems are known to exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests in the council.  &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable implementation such as the Justice Web, where hosts opt in, is possible. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of losing incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Morality Rating ===&lt;br /&gt;
&lt;br /&gt;
The global implementation still requires the existence of a morality rating, but in a global setting, we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a &amp;quot;master list&amp;quot; or a &amp;quot;slave list&amp;quot; of morality ratings. The obvious requirement for a built-in morality rating is that the host itself should not be able to arbitrarily modify the value. One possible mechanism is a Trusted Platform Module ([http://www.trustedcomputinggroup.org/developers/ TPM]), which allows encryption and decryption of data but does not allow extraction of the private encryption key. Indeed, storing the morality rating within hosts rather than on external lists alleviates the need for distributed storage and allows better scalability, but it also requires all hosts to be compliant with the mechanism. &lt;br /&gt;
&lt;br /&gt;
=== Connection management ===&lt;br /&gt;
&lt;br /&gt;
Because morality ratings are now stored on each host, there is no longer a need to look up the morality rating of a host upon an incoming connection. We therefore need a way to transmit the morality rating on each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean changing underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web. &lt;br /&gt;
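The connection decision might then be sketched as follows; the dict standing in for a packet header and the -100 threshold (borrowed from the Justice Web example) are assumptions.&lt;br /&gt;

```python
# Sketch of carrying the morality rating in an outgoing connection.
# The dict stands in for a new header field in a real network protocol.
def make_packet(payload, sender_rating):
    """Sender attaches its tamper-protected rating to every connection."""
    return {"morality": sender_rating, "payload": payload}

def accept_connection(packet, minimum=-100):
    """The destination decides locally, with no master-list lookup."""
    return packet["morality"] >= minimum

print(accept_connection(make_packet("GET /", 20)))    # True
print(accept_connection(make_packet("GET /", -500)))  # False
```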
&lt;br /&gt;
=== Rules and Judges ===&lt;br /&gt;
&lt;br /&gt;
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research. &lt;br /&gt;
&lt;br /&gt;
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but it would require a full reboot of the Internet as well as new hardware, making it a realistically unlikely solution.&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we investigate is comment spam. The type of spam we focus on is produced by automated scripts that insert the same comment, usually including links to other destination websites, on the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a service that is normally available is accessed either by numerous IPs through IP spoofing or by a distributed DoS attack using multiple valid IPs, which in turn bogs down the system because the service&#039;s capacity is constrained. By maxing out or flooding a service with multiple requests, a system will either be shut down or become very difficult to use; regardless, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site they are visiting to be secure. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not actually the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the usage of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12]Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9067</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9067"/>
		<updated>2011-04-03T16:29:08Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice in a distributed computing environment. Although human concepts related to justice, such as intent, cannot be applied directly in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions, so that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment benefits the system. A simple example of such a transaction can be visualized through the management of bandwidth: if a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime than to let the crime go unpunished, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment with retaliation. Although both incorporate the concept of punishment as a just, and necessary, act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation rests on the concept of deterrence: if you are convicted of a crime, someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus he should internalize how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment may well be necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, it has already affected the computing system; punishing the perpetrator will not reverse the effect of the act, and it may adversely affect the system further. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; the balance of punishment and reward are thus the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals to decide when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others could certainly work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you do not care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, down to the lowest, taking part in a crime through negligence. An example of such a distinction is deciding whether a car hit someone intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, preventing the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime involving the Internet and computers. One preventative measure, called the Computer Fraud and Abuse Act (“CFAA”), was created by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since its first implementation, it has required many changes because of ambiguity in how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures of computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes the commands input by its user, so everything the computer does must be on purpose, because it is just following instructions. This may be true, except when an error occurs in the system, or a bit goes missing, and sensitive information is sent to an incorrect address instead of the intended one. If this error created large losses for some entity, would the user be blamed, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans are charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are built, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(*perhaps if the computer was running some genetic programming to create a program which it deemed good, and then intentionally used it, then that intent to use differs from it continually creating new programs until it decides one is suitable)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to prevent a computer from doing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of the consequences, or shame, that would follow from malicious actions. This approach to justice is difficult because computers do not have feelings: doing any kind of work, from word processing to a denial of service attack, is equally agreeable to the machine. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing idle. The human who forces his computer to perform such malicious actions, however, may be deterred by the legal consequences, or by the performance drop on his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on the network by the same human user at another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. A morality system that gives every node a personal morality rating would allow nodes to communicate with other nodes based on how low or high that rating is. Lowering morality for malicious actions and raising it for helpful behaviour would allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated feeling of care and shame might allow a justice system to be implemented on computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: Comment Spam, Denial of Service Attacks, and Phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. We view the implementation as an incrementally deployable justice system, which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to the hosts inside the network, as well as to connections coming from outside it. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
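The point bookkeeping described above can be sketched in a few lines of Python. This is a minimal illustration, not a specification: the starting rating of 0, the penalty value, and the host names are our own assumptions, and only the -100 connection threshold comes from the example above.&lt;br /&gt;

```python
# Sketch of morality-rating bookkeeping for a Justice Web host.
# DEFAULT_RATING and the rule deltas are hypothetical values.

DEFAULT_RATING = 0  # every host starts at a predefined value

class Host:
    def __init__(self, host_id, rating=DEFAULT_RATING):
        self.host_id = host_id
        self.rating = rating

    def adjust(self, delta):
        """Earn or lose points according to behaviour."""
        self.rating += delta

def may_connect(host, minimum_rating):
    """A server-side rule: allow the connection only if the
    host's morality rating meets the server's threshold."""
    return host.rating >= minimum_rating

h = Host("node-17")
h.adjust(-150)                 # e.g. penalized for comment spam
print(may_connect(h, -100))    # prints False: -150 is below -100
```

A server would evaluate such a check against its own threshold before accepting each incoming connection, which is what lets each Justice Web enforce its own local rules.&lt;br /&gt;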
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large as more hosts are added, list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
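The master/slave split suggested above might be sketched as follows; the 100-point cut-off follows the example in this section, and all other names are hypothetical.&lt;br /&gt;

```python
# Minimal sketch of splitting the morality list: entries below a
# rating threshold go to a slave list, the rest stay on the master.

THRESHOLD = 100  # assumed cut-off, not fixed by the article

def partition_list(ratings):
    """Split {host_id: rating} into (master, slave) dictionaries."""
    master, slave = {}, {}
    for host_id, rating in ratings.items():
        if rating >= THRESHOLD:
            master[host_id] = rating
        else:
            slave[host_id] = rating
    return master, slave

master, slave = partition_list({"a": 250, "b": 40, "c": -10})
print(sorted(master), sorted(slave))  # ['a'] ['b', 'c']
```

In a real deployment the two dictionaries would live on different hosts, so a lookup would first consult whichever list is local and fall back to the other.&lt;br /&gt;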
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances, the process of reviewing evidence can be automated and would require no manual verification. However, some cases may not have enough evidence, or there may be a reason to justify the offense. In cases like these, the review of a judge becomes necessary. &lt;br /&gt;
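The automated portion of the claim workflow above could look roughly like this; the evidence check is a stand-in predicate for whatever automated review a real system would perform, and the severity value is illustrative.&lt;br /&gt;

```python
# Hedged sketch of claim handling: a node files a claim with
# evidence, and if the automated review accepts it, the judge
# lowers the offender's morality rating by the rule's severity.

def handle_claim(ratings, offender_id, evidence_ok, severity):
    """Apply the punishment only when the evidence is sufficient."""
    if evidence_ok:
        ratings[offender_id] -= severity
        return True
    return False  # insufficient evidence: escalate to a human judge

ratings = {"node-9": 20}
handle_claim(ratings, "node-9", evidence_ok=True, severity=50)
print(ratings["node-9"])  # prints -30
```

The False branch is where the manual review mentioned above takes over: the judge inspects the decrypted evidence rather than the rating being changed automatically.&lt;br /&gt;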
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that by default the judge be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by a majority of hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level and type of proof is required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but should be agreed upon by all hosts on the network or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
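The three-part rule structure described above can be sketched as a simple record; the offense names, proof descriptions, and point values here are purely illustrative.&lt;br /&gt;

```python
# A rule as a three-part record: the offense, the proof required
# for conviction, and the severity (morality points lost).

from collections import namedtuple

Rule = namedtuple("Rule", ["offense", "required_proof", "penalty_points"])

rules = [
    Rule("comment spam", "signed web-server log", 50),
    Rule("denial of service", "signed packet capture", 200),
]

# Index by offense so judges can look up the punishment for a claim.
lookup = {r.offense: r for r in rules}
print(lookup["denial of service"].penalty_points)  # prints 200
```

Publishing this table to every host would satisfy the visibility requirement above, much like a published criminal code.&lt;br /&gt;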
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application-layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim, or any other system in the chain of custody, does not tamper with the evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
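As a simplified stand-in for the digital signatures mentioned above, the following sketch tags each log with an HMAC under a shared key, letting a judge who holds the key detect tampering; a deployed system would use real public-key signatures so that claimants cannot forge one another&#039;s logs. The key and log contents are hypothetical.&lt;br /&gt;

```python
# Tamper-evident evidence logs, sketched with Python's standard
# hmac module (a symmetric stand-in for digital signatures).

import hashlib
import hmac

KEY = b"shared-secret"  # hypothetical key shared with the judge

def sign_log(log_bytes):
    """Produce a tag binding the log contents to the key."""
    return hmac.new(KEY, log_bytes, hashlib.sha256).hexdigest()

def verify_log(log_bytes, tag):
    """A judge recomputes the tag to detect any tampering."""
    return hmac.compare_digest(sign_log(log_bytes), tag)

log = b"2011-04-03 16:29 node-9 flooded port 80"
tag = sign_log(log)
print(verify_log(log, tag))          # prints True
print(verify_log(log + b"x", tag))   # prints False (tampered)
```

Any host in the chain of custody that alters the log invalidates the tag, which is exactly the property the evidence logs need.&lt;br /&gt;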
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three primary reasons a host may choose to join a Justice Web:&lt;br /&gt;
&lt;br /&gt;
1) The host is a server seeking to protect itself from malicious traffic by accessing a collective history of source hosts&#039; connections.&lt;br /&gt;
&lt;br /&gt;
2) The host is a client seeking to determine how &amp;quot;safe&amp;quot; it would be to access a given server based on that server&#039;s past actions.&lt;br /&gt;
&lt;br /&gt;
3) The host is either a server or client system seeking access to restricted resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a host retains its existing morality rating.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by the lower rungs. Again, this would be implementation specific, but it would especially make sense when multiple Justice Webs join or form some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily the same as those of other self-governing entities. In the real world, cross-jurisdiction legal systems are known to exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally, in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests.  &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable implementation such as the Justice Web, where hosts opt in, is possible at a global scale. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of incremental deployability.&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
Requires changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
&lt;br /&gt;
Requires a complete reboot of the internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is usually generated by automated scripts that insert the same comment, typically including links to other destination websites, into forums on public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trusted the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a normally available service is flooded with requests, either from spoofed IP addresses or, in a distributed DoS attack, from multiple valid IP addresses. The flood consumes the service&#039;s capacity, so the system either shuts down or becomes very difficult to use; in either case, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
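As a hedged illustration of how participation in such an attack might be detected (the sliding-window approach, class name, and thresholds below are our assumptions, not a prescribed mechanism), a service could count recent requests per source address and report sources that exceed a capacity limit:&lt;br /&gt;
&lt;br /&gt;
```python
from collections import deque

# Hypothetical flood detector (illustrative only): a source exceeding
# `max_requests` within `window` seconds is reported as a DoS participant.
class FloodDetector:
    def __init__(self, max_requests, window):
        self.max_requests = max_requests
        self.window = window
        self.history = {}  # source ip -) deque of request timestamps

    def request(self, ip, now):
        """Record a request at time `now`; return True if `ip` is flooding."""
        times = self.history.setdefault(ip, deque())
        times.append(now)
        # Drop timestamps that have aged out of the sliding window.
        while times and now - times[0] > self.window:
            times.popleft()
        return len(times) > self.max_requests

det = FloodDetector(max_requests=3, window=1.0)
hits = [det.request("10.0.0.1", t * 0.1) for t in range(6)]
```
&lt;br /&gt;
A report raised this way identifies a participant but says nothing about intent, which matches the uniform-punishment position taken earlier in this article.&lt;br /&gt;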
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system because the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be legitimate. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user intended to visit, but it is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9066</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9066"/>
		<updated>2011-04-02T22:54:39Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is either just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment with retaliation. Although both incorporate the concept of punishment as a just, and necessary, act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed so that he internalizes how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse those effects, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can impose punishments on other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are any members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories; corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive as commonly criminals may be asked to pay a penalty fine as well as serve a prison sentence; however they all serve different purposes. All three punishment types serve as a deterrent to future criminals but each method has a different active agent; corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say, DoS or spam attacks) then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not) then you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, being part of a crime negligently. An example of such a distinction would be determining whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity to prevent the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crimes involving computers and the Internet. One preventative measure for these crimes was created in 1984 by the United States Congress: the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, many changes have been made to it because of unspecific instances of how different crimes were categorized based on the mens rea: the distinction between “knowingly” and “intentionally” doing an act changed the degree of punishment, and accessing a system versus damaging a system had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of then-current security measures on computer networks by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did, and it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except in a case where an error has occurred in the system, or a bit has gone missing, and sensitive information intended for one address has been sent to an incorrect address. If this error created heavy losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(Perhaps if a computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from taking malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some sort of fear of consequences, or shame, that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that they would procure from a malicious act. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was performing those functions or standing idle. However, the human that forced the computer to take those malicious actions may be deterred by the consequences that might follow from the law, or by the possible performance drop of his own computer. The penalties currently in place only affect a human; whether it is a jail sentence or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent the human user from causing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Implementing a morality system in which every node carries a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high the rating is. Lowering the rating for malicious actions and raising it for being helpful to the system allows computers to &amp;quot;care&amp;quot; about who they are communicating with, and also to feel shame when their morality is so low that they can barely communicate with others (the lowest level might amount to expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
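The tiers below are one possible, entirely hypothetical mapping from morality ratings to allowed interaction levels; the thresholds and tier names are our own assumptions, with the lowest tier corresponding to the expulsion mentioned above:&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative tiers only: the thresholds and tier names are assumptions,
# not part of this article&#039;s proposal.
def interaction_tier(rating):
    if rating >= 50:
        return "trusted"      # full participation
    if rating >= 0:
        return "normal"       # ordinary communication
    if rating >= -100:
        return "restricted"   # limited, low-risk exchanges only
    return "expelled"         # peers refuse all communication

tiers = [interaction_tier(r) for r in (75, 10, -40, -200)]
```
&lt;br /&gt;
A node could consult such a mapping before accepting a connection, so that low-morality peers experience the simulated &amp;quot;shame&amp;quot; of reduced access.&lt;br /&gt;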
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is nevertheless important to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is an incrementally deployable justice system which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to hosts inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a host&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule which specifies that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality rating in one network might have a good rating in another. Indeed, each Justice Web may have a different set of rules, which may overlap with or be entirely different from those of another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
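As an illustrative sketch only (the class and function names here are our own, not part of any existing system), a per-network rating store and a threshold-based connection check could look like this in Python:

```python
INITIAL_RATING = 0    # every host begins at a predefined rating
MIN_RATING = -100     # example server threshold from the text

class MoralityList:
    """Morality ratings local to one Justice Web network."""

    def __init__(self, initial=INITIAL_RATING):
        self.initial = initial
        self.ratings = {}  # host id -> rating

    def rating(self, host):
        # Unknown hosts are assumed to hold the initial rating.
        return self.ratings.get(host, self.initial)

    def adjust(self, host, points):
        # Hosts earn (positive) or lose (negative) points for behaviour.
        self.ratings[host] = self.rating(host) + points

def allow_connection(morality_list, host, threshold=MIN_RATING):
    """A server-side rule: reject hosts rated below the threshold."""
    return morality_list.rating(host) >= threshold
```

Under this sketch, a host that has lost 150 points would be refused by a server enforcing the -100 rule, while an unknown host at the initial rating would be admitted.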
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large as more hosts are added, list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries for hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
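One hypothetical way to realize the rating-based mirroring logic described above is a simple partition at the 100-point cutoff (the function name and cutoff default are illustrative):

```python
def partition_master_list(ratings, cutoff=100):
    """Split rating entries into a slave list (rating < cutoff),
    which can be mirrored to other hosts, and the master list
    (the remaining entries)."""
    slave = {host: r for host, r in ratings.items() if r < cutoff}
    master = {host: r for host, r in ratings.items() if r >= cutoff}
    return master, slave
```

A lookup would then consult the slave list first and fall back to the master list, keeping most traffic off the central server.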
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances, the process of reviewing evidence can be automated and would require no manual verification. However, some cases may not have enough evidence, or there may be a reason to justify the offense. In cases like these, the review of a judge becomes necessary. &lt;br /&gt;
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that, by default, the judge be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by a majority of hosts in the network.&lt;br /&gt;
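The suggested default (the highest-rated host acts as judge) is simple to sketch; the function name is our own:

```python
def select_judge(ratings):
    """Default judge selection: the host with the highest morality rating.

    ratings: dict mapping host id -> morality rating.
    Returns None for an empty network."""
    if not ratings:
        return None
    return max(ratings, key=ratings.get)
```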
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level and type of proof is required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but should be agreed upon by all hosts on the network or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
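The three-part rule structure, together with the automated review described in the Judges section, could be modelled as a small record plus a lookup-and-penalize step. This is a sketch under our own naming; the offenses and point values are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    offense: str         # the deviant act the rule covers
    proof_required: str  # level and type of evidence needed to convict
    penalty: int         # morality points lost on conviction

# Example rule set for one Justice Web (values are illustrative).
RULES = {
    "comment-spam": Rule("comment-spam", "signed web server log", 50),
    "denial-of-service": Rule("denial-of-service", "signed packet capture", 200),
}

def judge_claim(ratings, offender, offense, evidence_sufficient):
    """Automated review: apply the rule penalty if the evidence suffices.

    Returns the offender's new rating, or None when there is no matching
    rule or insufficient evidence, in which case the claim would be
    escalated to a (human) judge."""
    rule = RULES.get(offense)
    if rule is None or not evidence_sufficient:
        return None
    ratings[offender] = ratings.get(offender, 0) - rule.penalty
    return ratings[offender]
```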
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim or any other system in the chain of custody does not tamper with evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
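A minimal tamper-detection scheme along these lines can be sketched with an HMAC over the serialized log. This is an assumption of ours, not a prescribed design; a real deployment would more likely use public-key signatures so judges can verify logs without holding per-host secret keys:

```python
import hashlib
import hmac
import json

def sign_log(log_entries, key):
    """Serialize a log and compute an HMAC tag so tampering is detectable."""
    payload = json.dumps(log_entries, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_log(payload, tag, key):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Any modification of the payload in the chain of custody causes verification to fail, which is what lets a judge trust the submitted evidence.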
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three primary reasons a host may choose to join a Justice Web:&lt;br /&gt;
&lt;br /&gt;
1) The host is a server seeking to protect itself from malicious traffic by accessing a collective history of source hosts&#039; connections.&lt;br /&gt;
&lt;br /&gt;
2) The host is a client seeking to determine how &amp;quot;safe&amp;quot; it would be to access a given server based on that server&#039;s past actions.&lt;br /&gt;
&lt;br /&gt;
3) The host is either a server or client system seeking access to restricted resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a host retains its existing morality rating.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by the lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built from millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Where should the master morality list be stored?&#039;&#039;&#039; - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;How are judges elected?&#039;&#039;&#039; - Self-governing entities often have a common set of laws. However, these laws are not necessarily the same as those of other self-governing entities. In the real world, cross-jurisdiction legal systems are known to exist (e.g., the United Nations, the North Atlantic Treaty Organization, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally, in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests in the council.&lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe a global system can be deployed incrementally in the same opt-in manner as the local Justice Web.&lt;br /&gt;
&lt;br /&gt;
=== Description ===&lt;br /&gt;
&lt;br /&gt;
Requires changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
&lt;br /&gt;
Requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is usually produced by automated scripts that insert the same comment, often including links to other destination websites, on the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a normally available service is accessed either by numerous IPs through IP spoofing or, in a distributed DoS attack, by multiple valid IPs, which in turn bog down the system because the service&#039;s capacity is constrained. By maxing out or flooding a service with multiple requests, an attacker makes the system either shut down or very difficult to use; regardless, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be the secure one he or she intended to visit. Unfortunately, the site the user has been directed to has been designed to imitate the appearance and behaviour of the intended site, but it is not the real webpage. As a result, the user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9065</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9065"/>
		<updated>2011-04-02T22:48:48Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Membership */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be applied to four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “an eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to discuss retributive punishment in comparison to retaliation. Although they both incorporate the concept of punishment as a just, and necessary, act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus he should internalize how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are any members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split based on good vs. bad; for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil; for example, good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, taking part in a crime negligently. An example of such a distinction is determining whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity to prevent the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crimes involving the Internet and computers. An example of preventative measures for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created in 1984 by the United States Congress. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because of unspecific instances of how different crimes were categorized based on the mens rea. Whether an act was done “knowingly” or “intentionally” changes the degree of punishment, and the distinction between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of then-current security measures on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill the worm, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did, and it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of mens rea. Some might say that because a computer executes exactly the commands input by its user, everything the computer does must be on purpose: it is simply following instructions. This may be true except when an error occurs in the system; for instance, if a bit goes missing, sensitive information may be sent to an incorrect address. If this error caused a large loss to some entity, would the user be blamed, or would the computer be blamed for negligently sending the data to the wrong address? This situation is similar to how humans are charged for killing someone: the difference between murder and manslaughter is intent.[12] Given how computers are currently structured, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if a computer ran a genetic-programming process to create a program it deemed good, and then intentionally used that program, the intent to use it would differ from merely continuing to generate new programs until one is deemed suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to prevent a computer from taking malicious actions. Following the general theory of deterrence, we would try to instill some fear of the consequences, or shame, that would come from acting maliciously. This approach is difficult because computers do not have feelings: to a computer, any work, from word processing to a denial-of-service attack, is equally agreeable. Deterrence only works on would-be criminals who fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would bring. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing idle. The human who forces a computer to take malicious actions, however, may be deterred by the legal consequences that might follow, or by the drop in performance of their own machine. The penalties currently in place affect only humans; whether the sentence is jail or confiscation of the physical computer, the human element of the problem is removed while the computer element remains. Conversely, if only the computer were punished for malicious actions, nothing would prevent the human user from continuing those actions from another terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdictional issues; instead, a new system must be implemented such that computers on it are deterred from malicious actions. Giving every node a personal morality rating would let nodes decide whether to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for being helpful to the system would allow computers to &amp;quot;care&amp;quot; about whom they communicate with, and to feel &amp;quot;shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might amount to expulsion). These simulated feelings of care and shame might allow a justice system to be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project, but it is still important to describe the features a fully functional system would require and to outline its potential benefits and shortcomings. In fact, we were unable to come up with a single feasible system; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, both take a teleologic-retributive approach to justice: punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial-of-service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section applies the above discussion of justice to the management of a computer network. It is intended to be an incrementally deployable justice system, which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is a computer network that applies a morality rating to hosts inside the network, as well as to connections coming from outside it. The morality rating is based on the previous actions of the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased for making bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving points to (or removing points from) hosts according to whether they honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service, based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
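As an illustrative sketch only (the default rating of 0, the point values, and the helper names are assumptions, not part of any specification), the following Python code shows how a node might track morality ratings and enforce such a connection rule:&lt;br /&gt;

```python
# Hypothetical sketch of per-node access control based on a morality rating.
# The default score, point adjustments, and the -100 cutoff are illustrative.

class MoralityLedger:
    def __init__(self, default_rating=0):
        self.default = default_rating
        self.ratings = {}  # host id -> current rating

    def adjust(self, host, points):
        """Add (or subtract) points for a host's behaviour."""
        self.ratings[host] = self.ratings.get(host, self.default) + points

    def rating(self, host):
        return self.ratings.get(host, self.default)

def allow_connection(ledger, host, minimum=-100):
    """A server-side rule: refuse hosts whose rating is below the minimum."""
    return ledger.rating(host) >= minimum

ledger = MoralityLedger()
ledger.adjust("host-a", 50)    # e.g. shared bandwidth with the network
ledger.adjust("host-b", -150)  # e.g. convicted of comment spam
print(allow_connection(ledger, "host-a"))  # True
print(allow_connection(ledger, "host-b"))  # False
```

Each node could apply its own `minimum` threshold, matching the text&#039;s point that enforcement is left to individual hosts.&lt;br /&gt;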
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large as hosts are added, list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision subsets of the master list (which we call &#039;&#039;slave lists&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries for hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
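One way to picture the rating-based split suggested above (the cutoff of 100 comes from the text; the dictionary layout is an assumption for illustration):&lt;br /&gt;

```python
# Illustrative partition of the morality list: entries below the cutoff go to
# a slave list mirrored on other hosts, the rest stay on the master list.

def partition_list(ratings, cutoff=100):
    """Split host ratings into (master, slave) by the given cutoff."""
    master = {h: r for h, r in ratings.items() if r >= cutoff}
    slave = {h: r for h, r in ratings.items() if r < cutoff}
    return master, slave

ratings = {"host-a": 250, "host-b": 40, "host-c": -120}
master, slave = partition_list(ratings)
print(master)  # {'host-a': 250}
print(slave)   # {'host-b': 40, 'host-c': -120}
```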
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances, the process of reviewing evidence can be automated and would require no manual verification. However, some cases may not have enough evidence, or there may be a reason to justify the offense. In cases like these, the review of a judge becomes necessary. &lt;br /&gt;
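The automated path of this claim-review process might look like the following sketch. The penalty table, claim fields, and the evidence check are placeholder assumptions; a real system would verify the signed evidence logs described below rather than merely check for their presence.&lt;br /&gt;

```python
# Sketch of automated claim review by a judge node. Penalty values and the
# evidence check are illustrative placeholders.

PENALTIES = {"comment_spam": -50, "denial_of_service": -200}  # assumed values

def adjudicate(claim, ratings):
    """Apply the offense's penalty to the offender's morality rating if the
    claim carries evidence; otherwise flag it for review by a human judge."""
    if not claim.get("evidence"):
        return "needs_manual_review"
    penalty = PENALTIES.get(claim["offense"])
    if penalty is None:
        return "unknown_offense"
    offender = claim["offender"]
    ratings[offender] = ratings.get(offender, 0) + penalty
    return "punished"

ratings = {"host-x": 20}
claim = {"offender": "host-x", "offense": "comment_spam",
         "evidence": ["signed-log-entry"]}
print(adjudicate(claim, ratings))  # punished
print(ratings["host-x"])           # -30
```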
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that by default the judge be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by a majority of the hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level and type of proof is required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but should be agreed upon by all hosts on the network or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
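The three-part rule described above can be sketched as a simple data structure. The field names and the sample offenses, proof requirements, and penalties are illustrative assumptions:&lt;br /&gt;

```python
# A rule as the three parts named in the text: the offense, the proof
# required to convict, and the penalty in morality points. Values are
# illustrative, not a proposed rule book.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    offense: str          # what behaviour is forbidden
    proof_required: str   # level/type of evidence needed to convict
    penalty_points: int   # morality points lost on conviction

# A Justice Web's rule set, visible to all hosts on the network.
RULES = [
    Rule("comment_spam", "signed application-layer logs", 50),
    Rule("denial_of_service", "signed packet captures from two hosts", 200),
]

def lookup(offense):
    """Return the rule for an offense, or None if no such rule exists."""
    return next((r for r in RULES if r.offense == offense), None)

print(lookup("comment_spam").penalty_points)  # 50
```

Publishing `RULES` to every host mirrors the requirement that the rule set be visible to all members of the network.&lt;br /&gt;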
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim or any other system in the chain of custody does not tamper with evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
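As a minimal sketch of tamper-evident logging, the following uses an HMAC over each log entry. This is an assumption for illustration: the text suggests digital signatures or encryption, and a deployed system would likely use public-key signatures so that judges need not share a secret with every host.&lt;br /&gt;

```python
# Sketch of tamper-evident evidence logs using an HMAC over each entry.
# The shared key is purely illustrative; a real deployment might use
# public-key signatures or encryption instead, as the text suggests.

import hashlib
import hmac
import json

KEY = b"per-host-secret-key"  # assumed to be provisioned per host

def sign_entry(entry: dict) -> dict:
    """Attach an authentication tag to a log entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "tag": tag}

def verify_entry(signed: dict) -> bool:
    """Recompute the tag; any tampering with the entry breaks verification."""
    payload = json.dumps(signed["entry"], sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])

log = sign_entry({"src": "host-b", "dst": "host-a", "event": "spam_post"})
print(verify_entry(log))               # True
log["entry"]["event"] = "benign_post"  # tampering in the chain of custody
print(verify_entry(log))               # False
```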
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three primary reasons a host may choose to join a Justice Web:&lt;br /&gt;
&lt;br /&gt;
1) The host is a server seeking to protect itself from malicious traffic by accessing a collective history of source hosts&#039; connections.&lt;br /&gt;
&lt;br /&gt;
2) The host is a client seeking to determine how &amp;quot;safe&amp;quot; it would be to access a given server, based on that server&#039;s past actions.&lt;br /&gt;
&lt;br /&gt;
3) The host is either a server or client system seeking access to restricted resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a host retains its existing morality rating.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation-specific, but it would make particular sense when multiple Justice Webs join or form some sort of alliance.&lt;br /&gt;
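The hierarchy of rungs could be as simple as routing claims to a judge tier by severity. The tier names and the severity boundaries below are arbitrary illustrative choices:&lt;br /&gt;

```python
# Sketch of hierarchical jurisdictions: claims are routed to a tier of judges
# by severity, so the top ring only sees the most serious offenses.
# Tier names and boundaries are arbitrary illustrative values.

def route_claim(severity):
    """Map a claim's severity (morality points at stake) to a judge tier."""
    if severity < 50:
        return "local-judge"
    elif severity < 200:
        return "regional-judge"
    return "top-ring"

print(route_claim(10))   # local-judge
print(route_claim(100))  # regional-judge
print(route_claim(500))  # top-ring
```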
&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but it is subject to tampering or to simple denial of service (a refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but these laws are not necessarily the same as those of other self-governing entities. In the real world, cross-jurisdiction legal systems are known to exist (e.g., the United Nations, the North Atlantic Treaty Organization) in which countries participate in so-called &amp;quot;global councils&amp;quot;. Generally, each participating member country appoints one or more people to represent the country&#039;s interests in the council.  &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable implementation is possible. A global implementation would require changes at multiple levels (operating systems, hardware, network protocols, infrastructure) and, effectively, a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is typically produced by automated scripts that insert the same comment, usually including links to other websites, into the forums of public sites. Beyond being annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trusted the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a normally available service is accessed either by numerous IPs through IP spoofing or by a distributed DoS attack using multiple valid IPs, bogging down the system as the service&#039;s capacity is exhausted. By flooding a service with requests, an attacker leaves the system either shut down or very difficult to use; either way, the outcome is denial of that service. We have come up with some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system because the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user enters information into, or clicks a link on, a website they believe to be secure. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user intended to visit, but it is not the real webpage. As a result, the user may expose private information to parties not legally entitled to view it. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, its use in this way is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9064</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9064"/>
		<updated>2011-04-02T22:48:00Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Membership */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections. The first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society; second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles that can be assigned and finite resources that can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic&#039;&#039;&#039;, &#039;&#039;&#039;Retributive&#039;&#039;&#039;, and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment benefits the system. A simple example can be visualized through the management of bandwidth: if a particular computer is deemed a criminal bandwidth hog, using more resources than it is allowed, the perpetrator&#039;s network connection may be throttled. This punishment corrects the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to it. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation rests on deterrence: if you are convicted of a crime, someone will get to exact revenge, and thus you will pay a price. Retribution requires the criminal to pay a price for the crime committed, so that he internalizes the crime&#039;s negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to society may have further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system; punishing the perpetrator will not reverse them, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed within the system such that some computers can hand out punishments upon other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system into a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; the balance of punishment and reward thus forms the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad: good is associated with things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good is associated with terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others could certainly serve as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These range from the highest level, committing a crime on purpose, to the lowest, being part of a crime through negligence. An example of such a distinction would be determining whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity to prevent the need for human interventions/investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crimes involving computers and the Internet. One preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created by the United States Congress in 1984. This criminal statute was built on the ideas of the mens rea and the MPC. Since it was first implemented, it has required many changes because of unspecific instances of how different crimes were categorized based on the mens rea: the difference between “knowingly” and “intentionally” doing an act changed the degree of punishment, and the distinction between accessing a system and damaging a system had to be made more specific over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on the computer networks of the time (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it was, and they also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true, except in a case where an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are built, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer has been decided, the next thing to consider is how one would prevent a computer from doing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of the consequences/shame that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equal from the computer's perspective. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit over penalty that they would procure from a malicious act. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. However, the human who forces their computer to perform such malicious actions may be deterred by the consequences that might follow from the law, or by the possible performance drop of their own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If instead the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on the network by the same human user using another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating would allow nodes to communicate with other nodes based on how low or high the rating is. Lowering the rating for malicious actions and raising it for being helpful to the system would allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is viewed as an incrementally deployable justice system which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule which specifies that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
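This threshold check can be sketched minimally as follows; the -100 cutoff comes from the example above, while the host IDs, ratings, and the neutral default of 0 for unknown hosts are assumptions of this sketch.&lt;br /&gt;

```python
# Hypothetical rating-gated access check. The -100 cutoff follows the
# example in the text; host IDs and ratings are made up.
MORALITY_THRESHOLD = -100

ratings = {"host-a": 42, "host-b": -150}

def allow_connection(host_id, threshold=MORALITY_THRESHOLD):
    # Unknown hosts are assumed to start at a neutral rating of 0.
    return ratings.get(host_id, 0) >= threshold

print(allow_connection("host-a"))  # rating 42 is above the cutoff
print(allow_connection("host-b"))  # rating -150 falls below the cutoff
```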
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow to a large size (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
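The rating-based split described above can be pictured as a simple partition. The cutoff of 100 follows the example in the text; the host names and ratings are illustrative assumptions.&lt;br /&gt;

```python
# Hypothetical master/slave list partition: entries below the cutoff are
# mirrored to slave lists on other hosts, the rest stay on the master list.
def partition_master_list(entries, cutoff=100):
    master, slave = {}, {}
    for host, rating in entries.items():
        (master if rating >= cutoff else slave)[host] = rating
    return master, slave

master, slave = partition_master_list({"a": 250, "b": 40, "c": -10})
print(master)  # only "a" meets the cutoff and stays on the master list
```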
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances the process of reviewing evidence can be automated, requiring no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In cases like these, the review of a judge becomes necessary. &lt;br /&gt;
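The automated portion of this claim-handling flow might look like the following sketch. The rule table, evidence check, and penalty values are assumptions; real evidence review would involve the cryptographic verification discussed under Evidence Logs.&lt;br /&gt;

```python
# Hypothetical automated judge: if a claim's evidence matches what the
# rule demands, dock the offender's rating; otherwise refer the case to
# a judge for manual review.
ratings = {"host-x": 0}

rules = {"comment-spam": {"evidence": "server-log", "penalty": 50}}

def handle_claim(offense, offender, evidence_kind):
    rule = rules.get(offense)
    if rule is None or evidence_kind != rule["evidence"]:
        return "referred to judge for manual review"
    ratings[offender] -= rule["penalty"]  # automated punishment
    return "punished"

print(handle_claim("comment-spam", "host-x", "server-log"))
```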
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that the judge by default be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, in which judges must be elected by a majority of hosts in the network.&lt;br /&gt;
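Both selection schemes are easy to sketch; the node names, ratings, and votes below are illustrative assumptions.&lt;br /&gt;

```python
from collections import Counter

def default_judge(ratings):
    # Default scheme: the node with the highest morality rating
    # (generally the network owner) acts as judge.
    return max(ratings, key=ratings.get)

def elected_judge(votes):
    # Democratic alternative: a candidate needs a strict majority of votes.
    candidate, count = Counter(votes).most_common(1)[0]
    return candidate if count > len(votes) / 2 else None

print(default_judge({"owner": 300, "n1": 40, "n2": 12}))
print(elected_judge(["n1", "n1", "n2"]))
```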
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, the level and type of proof required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but the rules should be agreed upon by all hosts on the network, or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under a given legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
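The three-part structure of a rule maps naturally onto a small record type. The example offenses, proof descriptions, and penalty values are assumptions of this sketch.&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    offense: str          # the act that is forbidden
    proof_required: str   # level/type of evidence needed to convict
    penalty_points: int   # morality rating points lost on conviction

# A Justice Web's rulebook should be visible to every host on the network.
rulebook = [
    Rule("comment-spam", "signed web server log", 50),
    Rule("denial-of-service", "signed packet capture", 200),
]
```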
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted so that the computer making the claim, or any other system in the chain of custody, cannot tamper with the evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
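As a minimal stand-in for this tamper-evidence requirement, a host could attach an HMAC computed with a key shared with the judges; a real deployment would more likely use public-key signatures. The shared key and log contents below are illustrative assumptions.&lt;br /&gt;

```python
# Minimal integrity sketch for evidence logs using an HMAC (stdlib only).
import hashlib
import hmac

KEY = b"shared-secret-with-judge"  # hypothetical; key distribution not shown

def sign_log(log_bytes):
    tag = hmac.new(KEY, log_bytes, hashlib.sha256).hexdigest()
    return log_bytes, tag

def verify_log(log_bytes, tag):
    expected = hmac.new(KEY, log_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

log, tag = sign_log(b"GET /forum/post spam-comment from 10.0.0.5")
print(verify_log(log, tag))          # untouched log verifies
print(verify_log(b"tampered", tag))  # altered log fails verification
```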
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three primary reasons a host may choose to join a Justice Web:&lt;br /&gt;
&lt;br /&gt;
1) The host is a server seeking to protect itself from malicious traffic by accessing a collective history of connecting hosts&#039; traffic.&lt;br /&gt;
&lt;br /&gt;
2) The host is a client seeking to determine how &amp;quot;safe&amp;quot; it would be to access a given server based on that server&#039;s past actions.&lt;br /&gt;
&lt;br /&gt;
3) The host is either a server or client system seeking access to restricted resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a host retains its existing morality rating.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
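Such a hierarchy could route claims by severity so that minor offenses never reach the top ring of judges; the tier names and boundaries below are purely illustrative assumptions.&lt;br /&gt;

```python
# Hypothetical severity-based routing of claims through a judge hierarchy.
def route_claim(severity):
    if severity < 10:
        return "local judge"        # minor claims stay on the lowest rung
    if severity < 100:
        return "regional judge"     # mid-severity claims escalate one level
    return "top ring of judges"     # only the most severe claims reach the top

print(route_claim(5))
print(route_claim(250))
```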
&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built from millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but it is subject to tampering or simple denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws. However, these laws are not necessarily the same as those of other self-governing entities. In the real world, cross-jurisdiction legal systems are known to exist (e.g., the United Nations, the North Atlantic Treaty Organization, etc.), where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally, in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests in the council.  &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable implementation is possible. A global implementation would:&lt;br /&gt;
&lt;br /&gt;
* Require changes at multiple levels (OS, hardware, network protocols, infrastructure)&lt;br /&gt;
&lt;br /&gt;
* Require a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we are focusing on is usually produced by automated scripts which insert the same comment, usually including links to other destination websites, on the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a service that is normally available is flooded with requests, either from numerous IPs produced through IP spoofing or, in a distributed DoS attack, from multiple valid IPs; this bogs down the system because the service&#039;s capacity is constrained. By maxing out or flooding a service with requests, the system is either shut down or made very difficult to use; either way, the outcome is a denial of that service. We have come up with some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site he/she is visiting to be secure. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not actually the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998. [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12]Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993. [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9063</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9063"/>
		<updated>2011-04-02T22:46:50Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Global Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “an eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime than to let the crime go unpunished, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment with retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes the crime’s negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse those effects, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can impose punishments on other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused. [1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals to decide when the punishment is complete. For example, lawyers, judges, psychologists and prison guards determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive; criminals may commonly be required to pay a penalty fine as well as serve a prison sentence. However, the methods serve different purposes. All three punishment types deter future criminals, but each has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison reduces personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say DoS or spam attacks) then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, being part of a crime negligently. An example of such a distinction would be deciding whether a car hitting someone had been done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to prevent the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime involving computers and the Internet. One preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), passed by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, many changes have been required because of ambiguity in how different crimes were categorized based on the mens rea: the difference between “knowingly” and “intentionally” doing an act changes the degree of punishment, and the distinction between accessing a system and damaging a system had to be made more specific over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on computer networks of the time by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error occurs in the system: say a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are built, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program it deemed good, and then intentionally used it, that intent to use would differ from continually creating new programs until one is deemed suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from doing malicious actions. Following the general deterrence theory, we would try to instill some fear of the consequences, or shame, that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterring potential criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that they would procure from a malicious act. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. However, the human that forces a computer to do such malicious actions may be deterred by the consequences that might follow from the law, or by the performance drop on his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions by the human user working from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high the rating is. Lowering the rating for malicious actions and raising it for being helpful to the system allows computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel &amp;quot;shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated feeling of care and shame might allow a justice system to be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is nevertheless important to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: Comment Spam, Denial of Service Attacks, and Phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is an incrementally deployable justice system which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to hosts inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points depending on whether hosts honour a contract. &lt;br /&gt;
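As a concrete sketch of the point mechanism above (the class name, the starting value of zero, and the point amounts are our own illustrative assumptions, not part of the design):

```python
class MoralityRating:
    """A host's morality rating: a running point total (illustrative sketch)."""

    def __init__(self, start: int = 0):
        # Each host begins with a predefined rating; zero is an assumed default.
        self.value = start

    def reward(self, points: int) -> int:
        # Earn points, e.g. for making bandwidth available or honouring a contract.
        self.value += points
        return self.value

    def penalize(self, points: int) -> int:
        # Lose points, e.g. when a judge upholds a claim against this host.
        self.value -= points
        return self.value
```

A host that shared bandwidth and was later convicted of an offense would simply gain and lose points through these two calls.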
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule which specifies that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
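The threshold rule in the example above could be sketched as follows (the cutoff of -100 is taken from the example; the function name and everything else are assumptions):

```python
def may_connect(morality_rating: int, threshold: int = -100) -> bool:
    """Service-side check: hosts with a rating below the threshold may not connect."""
    return morality_rating >= threshold
```

Each service provider would supply its own threshold, since rules are local to a Justice Web.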
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
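The mirroring policy suggested above (entries under a rating of 100 go to a slave list) might look like this minimal sketch; the function name and the plain-dictionary representation of the list are assumptions:

```python
def partition_list(ratings: dict, cutoff: int = 100):
    """Split the full rating table: entries below the cutoff are mirrored
    to a slave list, while the rest stay on the master list."""
    slave = {host: r for host, r in ratings.items() if r < cutoff}
    master = {host: r for host, r in ratings.items() if r >= cutoff}
    return master, slave
```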
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances the process of reviewing evidence can be automated, requiring no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In cases like these, review by a judge becomes necessary. &lt;br /&gt;
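The automated part of this claim-handling flow could be sketched as below; the penalty table, the function name, and the boolean stand-in for evidence review are all assumptions (a real system would decrypt and inspect the evidence log, escalating unclear cases to a judge):

```python
# Illustrative penalty severities; not prescribed by the design above.
PENALTIES = {"comment_spam": 25, "denial_of_service": 100}

def judge_claim(ratings: dict, offender: str, offense: str,
                evidence_sufficient: bool) -> bool:
    """Automated review: if the evidence suffices, lower the offender's
    rating by the offense's severity; otherwise defer to a human judge."""
    if not evidence_sufficient or offense not in PENALTIES:
        return False  # claim needs manual review
    ratings[offender] = ratings.get(offender, 0) - PENALTIES[offense]
    return True
```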
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that the judge by default be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges are elected by a majority of hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, the level and type of proof required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but the rules should be agreed upon by all hosts on the network, or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under a legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
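A rule's three parts map naturally onto a small record type; the field names and the two sample rules are illustrative assumptions:

```python
from typing import NamedTuple

class Rule(NamedTuple):
    """The three parts of a rule described above."""
    offense: str
    required_proof: str   # level and type of evidence needed to convict
    penalty_points: int   # morality rating points lost on conviction

# A publicly visible rule set (contents assumed for illustration).
RULES = [
    Rule("comment_spam", "server logs showing repeated identical posts", 25),
    Rule("denial_of_service", "packet captures showing flood traffic", 100),
]
```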
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim, or any other system in the chain of custody, does not tamper with the evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
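One minimal way to make log entries tamper-evident is a message authentication code over each entry; this sketch uses an HMAC with a shared key as a stand-in for the digital signatures described above (the key distribution scheme and the choice of HMAC over public-key signatures are assumptions):

```python
import hashlib
import hmac

def sign_entry(key: bytes, entry: bytes) -> bytes:
    """Tag a log entry so later tampering in the chain of custody is detectable."""
    return hmac.new(key, entry, hashlib.sha256).digest()

def verify_entry(key: bytes, entry: bytes, tag: bytes) -> bool:
    """Judge-side check that an evidence entry matches its tag."""
    return hmac.compare_digest(sign_entry(key, entry), tag)
```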
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the slave/master list concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains the morality rating it had outside the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
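Routing claims through such a hierarchy could be as simple as the sketch below, where each tier advertises the maximum claim severity it handles (the tier thresholds and function name are assumptions):

```python
def route_claim(severity: int, tier_caps: list) -> int:
    """Return the index of the lowest rung able to handle this severity.
    tier_caps lists each tier's maximum severity, lowest rung first."""
    for tier, cap in enumerate(tier_caps):
        if severity <= cap:
            return tier
    return len(tier_caps) - 1  # the top ring of judges takes everything else
```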
&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
===Overview===&lt;br /&gt;
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term &#039;&#039;Internet&#039;&#039;). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simple denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws. However, these laws are not necessarily the same across different self-governing entities. In the real world, cross-jurisdiction legal systems are known to exist (e.g., the United Nations, the North Atlantic Treaty Organization, etc.), where countries participate in so-called &amp;quot;global councils&amp;quot;. Generally, in these types of councils, each participating member country appoints one or more people to represent the country&#039;s interests in the council. &lt;br /&gt;
&lt;br /&gt;
Due to these restrictions, we do not believe an incrementally deployable implementation is possible. A global implementation would:&lt;br /&gt;
* Require changes at multiple levels (OS, hardware, network protocols, infrastructure)&lt;br /&gt;
* Require a complete reboot of the Internet&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we are focusing on is inserted by automated scripts which post the same comment, usually including links to other destination websites, on forums of public sites. Although usually just annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a service that is normally available is accessed either by numerous IPs through IP spoofing, or by a distributed DoS attack using multiple valid IPs, bogging down the system because the service&#039;s capacity is constrained. By maxing out or flooding a service with multiple requests, a system is either shut down or made very difficult to use; regardless, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site he/she is visiting to be secure. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &#039;&#039;A Theory of Justice: Revised Edition&#039;&#039;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &#039;&#039;Discipline &amp;amp; Punish: The Birth of the Prison&#039;&#039;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &#039;&#039;Ecce Homo &amp;amp; The Antichrist&#039;&#039;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915) and Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, ed. Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9062</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9062"/>
		<updated>2011-04-02T22:31:26Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Evidence Logs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing the resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment with retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus internalize how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can impose punishments upon other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly regarded as being relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, being part of a crime negligently. An example of such a case would be distinguishing whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity to prevent the need for human interventions/investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crimes on the Internet and in computers. An example of preventative measures for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created in 1984 by the United States Congress. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because it was unspecific about how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” doing an act changes the degree of punishment, and the difference between accessing a system and damaging a system had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill the worm, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except in a case where an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created many losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(*perhaps if the computer was running some genetic programming to create a program which it deemed good, and then intentionally used it, then that intent to use differs from it continually creating new programs until it decides one is suitable)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer has been decided, the next thing to consider is how one would prevent a computer from doing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences, or shame, that would follow from causing malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equal in the eyes of the computer. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit over penalty that they will procure from a malicious act. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing by idle. However, the human that might force his or her computer to perform such malicious actions may be deterred by the consequences that might follow from the law, or by the performance drop on his or her own computer. The penalties currently in place only affect a human; whether it be a jail sentence or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on a computer network by the same human user at another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system may one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Implementing a morality system in which every node on the system carries a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high the rating is. Lowering morality for malicious actions and raising it for being more helpful to the system allows computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is nevertheless important to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of the system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: Comment Spam, Denial of Service Attacks, and Phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is viewed as an incrementally deployable justice system which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
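As a minimal sketch of these mechanics (the starting value of zero, the point deltas, and the -100 threshold are illustrative assumptions, not fixed parts of the design):&lt;br /&gt;

```python
# Sketch of a per-network morality rating ledger. Names and values are
# illustrative; each Justice Web would choose its own rules and thresholds.

class MoralityLedger:
    """Tracks morality ratings local to one Justice Web."""

    def __init__(self, starting_rating=0):
        self.starting_rating = starting_rating
        self.ratings = {}  # host ID -> current rating

    def rating(self, host):
        # Unknown hosts get the predefined starting rating.
        return self.ratings.get(host, self.starting_rating)

    def adjust(self, host, points):
        """Earn or lose points according to observed behaviour."""
        self.ratings[host] = self.rating(host) + points

    def may_connect(self, host, minimum=-100):
        """A service refuses hosts below its chosen threshold."""
        return self.rating(host) >= minimum


web = MoralityLedger()
web.adjust("host-a", 25)    # e.g., shared bandwidth with the network
web.adjust("host-b", -150)  # e.g., convicted of comment spam
print(web.may_connect("host-a"))  # True
print(web.may_connect("host-b"))  # False
```

Because ratings are local to each network, a separate ledger instance would be kept per Justice Web.&lt;br /&gt;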
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow to a large size (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
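The master/slave split could be sketched as follows; the cut-off of 100 comes from the example above, while the function name and dictionary layout are assumptions for illustration:&lt;br /&gt;

```python
# Sketch of partitioning the rating database by morality rating.
def partition(master_list, cutoff=100):
    """Split ratings: hosts at or above the cutoff stay in the master
    list on the central server; the rest form a slave list that is
    mirrored to other hosts."""
    master, slave = {}, {}
    for host, rating in master_list.items():
        (master if rating >= cutoff else slave)[host] = rating
    return master, slave


ratings = {"host-a": 250, "host-b": 40, "host-c": -120}
master, slave = partition(ratings)
print(sorted(master))  # ['host-a']
print(sorted(slave))   # ['host-b', 'host-c']
```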
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances, the process of reviewing evidence can be automated and would require no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In cases like these, the review of a judge becomes necessary. &lt;br /&gt;
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that the judge by default be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by a majority of hosts in the network.&lt;br /&gt;
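&lt;br /&gt;
Under the default suggestion above, judge selection is simply a maximum over morality ratings; a minimal sketch (default_judge is our name, and ties are broken arbitrarily):&lt;br /&gt;

```python
# Hypothetical sketch: the default judge is the node with the highest
# morality rating in the network.

def default_judge(ratings):
    # max over host ids, keyed by each host's morality rating
    return max(ratings, key=ratings.get)

judge = default_judge({"owner": 400, "host-1": 120, "host-2": -10})
```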
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level and type of proof is required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but should be agreed upon by all hosts on the network or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
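&lt;br /&gt;
The three-part rule structure could be represented as a simple record; in this sketch the Rule type and the example offenses and point values are ours, for illustration only:&lt;br /&gt;

```python
# Hypothetical sketch of the three-part rule structure: the offense,
# the level of proof required for conviction, and the severity of the
# punishment in morality-rating points.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    offense: str          # what act is forbidden
    required_proof: int   # evidence level needed to convict
    penalty: int          # morality points deducted on conviction

# A Justice Web's rule set would be visible to all hosts on the network.
RULES = [
    Rule("comment-spam", required_proof=1, penalty=10),
    Rule("denial-of-service", required_proof=2, penalty=50),
]
```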
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application-layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim, or any other system in the chain of custody, cannot tamper with evidence. When evidence is received by judges, the logs are decrypted and reviewed.&lt;br /&gt;
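&lt;br /&gt;
As a minimal sketch of tamper-evident logs, a host could authenticate each log entry with an HMAC. Note the assumption here of a key shared with the judge; the proposal above leaves the choice of signing versus encryption open, and a real deployment would likely prefer public-key signatures so judges need not hold per-host secrets:&lt;br /&gt;

```python
# Hypothetical sketch: a host tags its evidence log with an HMAC so the
# judge can detect tampering anywhere in the chain of custody.

import hashlib
import hmac

def sign_log(key: bytes, log: bytes) -> bytes:
    return hmac.new(key, log, hashlib.sha256).digest()

def verify_log(key: bytes, log: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(sign_log(key, log), tag)

key = b"shared-secret"  # illustrative only; a real key would be random
log = b"2011-04-11 host-17 sent 10000 identical comments"
tag = sign_log(key, log)
```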
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. A member could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and from malicious computers.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses (though this may conflict with the master/slave list concept).&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains the morality rating (MR) it had outside the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
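&lt;br /&gt;
Routing claims through such a hierarchy could look like the following sketch (the tier names and severity cutoffs are invented for illustration):&lt;br /&gt;

```python
# Hypothetical sketch of jurisdictions: claims are routed to a tier of
# judges based on the severity of the alleged offense, so lower rungs
# handle minor claims and only severe ones reach the top.

def route_claim(severity, tiers):
    """tiers: list of (max_severity, judge) pairs, lowest rung first."""
    for max_severity, judge in tiers:
        if severity <= max_severity:
            return judge
    return tiers[-1][1]  # anything beyond the scale goes to the top tier

TIERS = [(10, "local-judge"), (50, "regional-judge"), (100, "alliance-council")]
```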
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as other entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. A global justice web would require changes at multiple levels (OS, hardware, network protocols, infrastructure), amounting to a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is produced by automated scripts that insert the same comment, usually including links to other destination websites, into forums on public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting visitors who trust the content of the original site hosting the forum.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a normally available service is flooded with requests, either from spoofed IPs or, in a distributed DoS attack, from multiple valid IPs. By maxing out the service&#039;s capacity, the attack leaves the system shut down or very difficult to use; either way, the outcome is a denial of that service. We propose some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system because the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be legitimate. Unfortunately, the site they have been directed to is designed to imitate the appearance and behaviour of the site the user intended to visit, but it is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site to deceive users is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &#039;&#039;A Theory of Justice: Revised Edition&#039;&#039;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &#039;&#039;Discipline &amp;amp; Punish: The Birth of the Prison&#039;&#039;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &#039;&#039;Ecce Homo &amp;amp; The Antichrist&#039;&#039;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9061</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9061"/>
		<updated>2011-04-02T21:59:39Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Master List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections. The first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second describes the components necessary to create a justice system for a distributed computing society and how these components would apply to four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and finite resources which can be used to manage the “benefits and burdens” of society.&lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
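&lt;br /&gt;
As a toy sketch of this teleologic punishment (the quota, rate, and throttle factor are invented for illustration), throttling rather than disconnecting might look like:&lt;br /&gt;

```python
# Hypothetical sketch of the bandwidth-hog example: a host caught using
# more than its quota has its connection throttled rather than severed,
# freeing resources for the rest of the system.

def throttled_rate(usage, quota, full_rate, throttle_factor=0.25):
    # Offenders keep only a fraction of the full rate;
    # compliant hosts keep it all.
    if usage > quota:
        return full_rate * throttle_factor
    return full_rate

rate = throttled_rate(usage=900, quota=500, full_rate=100.0)
```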
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to it. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation rests on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge and you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes how the crime negatively affects society. [1]&lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effects of the act, and it may adversely affect the system.&lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories; corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals to decide when the punishment is complete. For example, lawyers, judges, psychologists, and prison guards determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be asked both to pay a penalty fine and to serve a prison sentence; however, they serve different purposes. All three punishment types act as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is framed as good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is framed as good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers worldly, cruel, selfish, wealthy, and aggressive.[6]&lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers, so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, being part of a crime negligently. An example of such a distinction is deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to avoid the need for human intervention and investigation.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime to the Internet and to computers. One preventative measure was created in 1984 by the United States Congress: the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because of ambiguity in how different crimes were categorized based on mens rea. The distinction between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of then-current security measures on computer networks by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error occurs in the system: say a bit goes missing and sensitive information is sent to an incorrect address. If this error creates losses for some entity, is the user to blame, or is the computer to blame for negligently sending the data to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is intent.[12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if a computer ran a genetic programming system to create a program it deemed good, and then intentionally used that program, the intent to use it would differ from merely continuing to create new programs until one is deemed suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to deter a computer from malicious actions. Following general deterrence theory, we would try to instill some fear of consequences or shame in the would-be offender. This approach is difficult because computers do not have feelings; any kind of work, from word processing to a denial of service attack, is equally agreeable to them. Deterrence only works on potential criminals who fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was performing those functions or standing idle. However, the human who forces a computer to perform malicious actions may be deterred by the consequences that might follow from the law, or by the performance drop on his own computer. The penalties currently in place only affect a human, whether by a jail sentence or by confiscation of the physical computer; the human aspect of the problem is addressed, but the computer element is not. Conversely, if only the computer were punished for malicious actions, nothing would prevent the human user from committing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would eventually want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; instead, a new system must be implemented so that computers on the network are deterred from malicious actions. A morality system that gives every node a personal morality rating would let nodes decide whom to communicate with based on how low or high a rating is. Because malicious actions lower the rating and helpful behaviour raises it, computers would effectively &amp;quot;care&amp;quot; about whom they communicate with, and would feel a form of shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated care and shame might allow a justice system to be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features a fully functional system would require and to outline its potential benefits and shortcomings. In fact, we were unable to settle on one system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, both take a teleologic-retributive approach to justice: punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes the above discussion of justice and applies it to the management of a computer network. We view it as an incrementally deployable justice system, which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to the nodes inside the network, as well as to connections coming from outside it. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are refused. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
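The threshold behaviour described above can be sketched as follows. This is a hypothetical illustration only: the class and parameter names are ours, not part of any specified Justice Web design.&lt;br /&gt;

```python
# Hypothetical sketch of morality-rating access control. A host grants or
# denies a connection based on the requester's rating and a per-service
# threshold, such as the -100 cutoff in the example above.

class Host:
    def __init__(self, min_rating):
        self.min_rating = min_rating  # lowest rating this service accepts

    def allow_connection(self, requester_rating):
        # Refuse hosts whose rating falls below the service's threshold.
        return requester_rating >= self.min_rating

server = Host(min_rating=-100)
```

A node with rating -50 would be admitted by this server, while one at -150 would be refused.&lt;br /&gt;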
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer with a low morality rating in one network might have a good rating in another. Indeed, each Justice Web may have a set of rules that overlaps with, or is entirely different from, another Justice Web&#039;s. This is similar to real-world justice, where an action may be considered criminal in one country but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large as hosts are added, list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
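The mirroring rule given as an example above could be sketched as a simple partition; the function name and threshold are illustrative assumptions.&lt;br /&gt;

```python
# Hypothetical sketch of splitting the rating database: entries with a
# morality rating below the threshold go to a slave list for mirroring,
# the rest stay in the master list.

def partition_ratings(ratings, threshold=100):
    master, slave = {}, {}
    for host, rating in ratings.items():
        (master if rating >= threshold else slave)[host] = rating
    return master, slave

master, slave = partition_ratings({"a": 150, "b": 40, "c": 100})
```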
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances the process of reviewing evidence can be automated, requiring no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In such cases, review by a judge becomes necessary. &lt;br /&gt;
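The automated portion of this claim flow could look like the sketch below. The evidence check is a deliberate placeholder (a bare count), and all names and the penalty value are assumptions for illustration.&lt;br /&gt;

```python
# Hypothetical sketch of automated claim handling: convict and dock the
# offender's rating when evidence meets the rule's bar, otherwise escalate
# the case for review by a (human-assisted) judge.

def judge_claim(claim, ratings, required_evidence=1, penalty=10):
    if len(claim["evidence"]) >= required_evidence:
        ratings[claim["offender"]] = ratings.get(claim["offender"], 0) - penalty
        return "convicted"
    return "needs_review"  # insufficient evidence: a judge must intervene

ratings = {"x": 0}
verdict = judge_claim({"offender": "x", "evidence": ["log entry"]}, ratings)
```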
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that by default the judge be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Judges could also be selected democratically, elected by a majority of hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, the level and type of proof required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left to the judges, but the rules should be agreed upon by all hosts on the network, or at the very least be visible to them. This is akin to a human justice system, where everyone under a legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the Criminal Code of Canada]).&lt;br /&gt;
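The three-part rule structure described above could be represented as a simple record; the field names, offenses, and point values here are invented for illustration.&lt;br /&gt;

```python
# Hypothetical encoding of a rule: the offense, the proof required for a
# conviction, and the severity (morality points lost).

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    offense: str
    required_proof: str   # level/type of evidence needed to convict
    penalty_points: int   # morality points the offender loses

rulebook = [
    Rule("comment_spam", "matching comment logs from two hosts", 10),
    Rule("denial_of_service", "traffic logs from the victim", 50),
]
```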
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web and track that computer&#039;s most recent network activity. The logs are encrypted to ensure that the computer making a claim cannot tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
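The article calls for encryption so that a claimant cannot alter evidence. As a minimal stand-in, the sketch below tags each log entry with an HMAC under a key known only to the judges, which makes tampering detectable; this illustrates integrity rather than secrecy, and a real design would add actual encryption and key provisioning.&lt;br /&gt;

```python
# Hypothetical tamper-evident log entries via HMAC. The shared key is a
# placeholder; in practice judges would hold provisioned keys.

import hashlib
import hmac

JUDGE_KEY = b"shared-judge-key"  # illustrative only

def seal(entry: str) -> tuple:
    # Attach an authentication tag the claimant cannot forge.
    tag = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return entry, tag

def verify(entry: str, tag: str) -> bool:
    # Judges recompute the tag; a mismatch means the log was altered.
    expected = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

entry, tag = seal("host A sent 500 requests in 1s")
```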
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and malicious hosts.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses (though this might conflict with the master/slave list concept).&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains the morality rating (MR) it held outside the network. In other words, you cannot reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense when multiple Justice Webs join or form some sort of alliance.&lt;br /&gt;
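Such a hierarchy might route claims by severity as in the sketch below; the tier names and cutoffs are invented, since the article leaves them implementation specific.&lt;br /&gt;

```python
# Hypothetical routing of claims to judge tiers by severity: minor claims
# stay at the lowest rung, severe ones climb the hierarchy.

def assign_tier(severity: int) -> str:
    if severity >= 50:
        return "high_council"    # e.g., an alliance of Justice Webs
    if severity >= 10:
        return "regional_judge"
    return "local_judge"
```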
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local Justice Web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simple denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often share a common set of laws, but not necessarily the same laws as other entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. A global implementation would require changes at multiple levels (OS, hardware, network protocols, infrastructure), and effectively a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we investigate is comment spam. The spam we focus on is usually produced by automated scripts that insert the same comment, typically including links to other websites, into forums on public sites. Beyond being annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting visitors who trusted the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a normally available service is accessed either by numerous spoofed IP addresses or, in a distributed DoS attack, by multiple valid IP addresses, bogging down the system as its service capacity is exhausted. By flooding a service with requests, an attacker renders the system unusable or very difficult to use; either way, the outcome is a denial of that service. We propose some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing presents an interesting challenge for a justice system because the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user enters information or clicks a link on a website, expecting the site to be the legitimate one. Unfortunately, the site they have been directed to is designed to imitate the appearance and behaviour of the site the user intended to visit, and as a result the user may expose private information to parties not legally entitled to view it. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, its use in this way is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9060</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9060"/>
		<updated>2011-04-02T21:59:02Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Master List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment benefits the system. A simple example can be visualized through the management of bandwidth: if a particular computer is deemed a criminal bandwidth hog, using more resources than it is allowed, the perpetrator&#039;s network connection may be throttled. This punishment corrects the deviant computer while freeing resources for other computers in the system.&lt;br /&gt;
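The bandwidth-throttling example above can be sketched as follows; the allocation numbers and the throttling factor are invented for illustration, and the point is only that the punishment itself returns resources to the shared pool.&lt;br /&gt;

```python
# Hypothetical teleologic punishment: halve the offender's bandwidth
# allocation so the freed capacity benefits the rest of the system.

def throttle(allocations, offender, factor=0.5):
    reclaimed = allocations[offender] * (1 - factor)
    allocations[offender] *= factor
    return reclaimed  # bandwidth returned to the shared pool

alloc = {"hog": 100.0, "peer": 20.0}
freed = throttle(alloc, "hog")
```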
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just, or intrinsically valuable, even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “an eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not gain any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment with retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation rests on deterrence: if you are convicted of a crime, someone will get to exact revenge, and thus you will pay a price. Retribution requires the criminal to pay a price for the crime committed, so that he internalizes how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
While punishment may be necessary, it is hard to see a situation where truly retributive punishment benefits a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to society may further damage the system. If a crime has been committed, its effects have already been felt by the computing system; punishing the perpetrator will not reverse them, and may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking the specific type of communication originating from the criminal computer, it becomes a teleologic-retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
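The narrower punishment described above, blocking only the harmful traffic type rather than expelling the host, could be sketched as a filter; the message kinds used here are illustrative.&lt;br /&gt;

```python
# Hypothetical teleologic-retributive punishment: drop only the offending
# message type so the host can still provide useful routing to others.

def filter_traffic(messages, blocked=("spam_comment",)):
    return [m for m in messages if m["kind"] not in blocked]

inbound = [{"kind": "spam_comment"}, {"kind": "route_update"}]
```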
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, some power imbalance must be designed into the system so that certain computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exception is members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; the balance of punishment and reward forms the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of their crime. The fine imposed may not equal the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused. [1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be required to pay a fine as well as serve a prison sentence; however, they serve different purposes. All three punishment types deter future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is framed as good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is framed as good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others could certainly serve as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers acted selfishly, cruelly, or aggressively (say, DoS or spam attacks), those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, relationships between computers could be created or destroyed. Moreover, relationship parameters could be given to computers so that if you do not care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
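The relationship parameter described above could be sketched as a per-node tolerance against a peer moral score; the scores and thresholds here are invented for illustration.&lt;br /&gt;

```python
# Hypothetical sketch: a node accepts an interaction only if the peer's
# moral score meets that node's configured tolerance. A permissive node
# (low tolerance) trades with less "moral" peers than a strict one.

def accept_interaction(peer_score: int, tolerance: int) -> bool:
    return peer_score >= tolerance

strict, permissive = 50, -50
```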
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly regarded as being relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. The highest level is committing a crime on purpose, and the lowest is taking part in a crime negligently. An example of such a distinction is whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, preventing the need for human intervention or investigation.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime involving the Internet and computers. One example of a preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), enacted by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since its first implementation, it has required many changes because of unspecific instances of how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changed the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of then-current security measures on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that the computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except in the case where an error has occurred in the system: say a bit has been corrupted, and sensitive information has been sent to an incorrect address. If this error created losses for some entity, would the user be blamed for it, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(*Perhaps if the computer ran some genetic programming to create a program it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be determined, the next thing to consider is how to prevent a computer from taking malicious actions. Following general deterrence theory, we would try to instill some fear of the consequences, or shame, that would come from causing malicious actions. The problem with this approach is that computers do not have feelings; any kind of work, from word processing to a denial of service attack, is equally agreeable to them. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing idle. However, the human who forced the computer to take such malicious actions may be deterred by the legal consequences that might follow, or by the possible performance drop on his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent the human user from causing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for behaviour helpful to the system allows computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel &amp;quot;shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of the system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. We view the implementation as an incrementally deployable justice system, which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule which specifies that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
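The rating-and-threshold mechanism described above can be sketched in a few lines. This is a minimal illustration only: the class and host names, the starting rating of 0, and the -100 cutoff are assumptions chosen for the example, not part of any specification.

```python
# Sketch of a per-network morality rating with a threshold-based access
# rule. All names and numeric values are illustrative assumptions.

class JusticeWeb:
    def __init__(self, access_threshold=-100, initial_rating=0):
        self.access_threshold = access_threshold
        self.initial_rating = initial_rating
        self.ratings = {}  # host id -> morality rating

    def rating(self, host):
        # Unknown hosts start at the network's default rating.
        return self.ratings.get(host, self.initial_rating)

    def adjust(self, host, points):
        # Earn points for good behaviour (e.g. sharing bandwidth),
        # lose points for breaking the network's rules.
        self.ratings[host] = self.rating(host) + points

    def may_connect(self, host):
        # A server refuses connections from hosts below the threshold.
        return self.rating(host) >= self.access_threshold


web = JusticeWeb()
web.adjust("spammer.example", -150)   # punished for comment spam
web.adjust("helper.example", +20)     # rewarded for sharing bandwidth
print(web.may_connect("spammer.example"))  # False
print(web.may_connect("helper.example"))   # True
```

Because ratings live in a per-network table, the same host can carry different ratings in different Justice Webs, matching the local-jurisdiction behaviour described below.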
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
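The rating-based split suggested above could be sketched as follows; the 100-point cutoff and the dictionary-based storage are assumptions made for illustration.

```python
# Sketch of splitting the full rating table into a central master list and
# a mirrored "slave list" by rating. Cutoff and storage are illustrative.

def partition_list(ratings, cutoff=100):
    """Entries below the cutoff go to the mirrored slave list;
    the rest stay on the central master list."""
    master, slave = {}, {}
    for host, rating in ratings.items():
        (slave if rating < cutoff else master)[host] = rating
    return master, slave


ratings = {"a.example": 250, "b.example": 40, "c.example": -10}
master, slave = partition_list(ratings)
print(master)  # {'a.example': 250}
print(slave)   # {'b.example': 40, 'c.example': -10}
```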
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances the process of reviewing evidence can be automated, requiring no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In cases like these, the review of a judge becomes necessary. &lt;br /&gt;
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that by default the judge be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by a majority of hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level and type of proof is required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but should be agreed upon by all hosts on the network or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
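The three-part rule structure, together with a judge applying it as described in the Judges section, might look like the following sketch. The specific offenses, evidence counts, and penalty values are hypothetical, and "amount of proof" is simplified here to a count of evidence entries.

```python
# Sketch of a three-part rule (offense, required proof, penalty) and an
# automated judge that convicts or escalates to human review. All names
# and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Rule:
    offense: str
    min_evidence: int   # level/amount of proof required to convict
    penalty: int        # morality points lost on conviction

RULEBOOK = {
    "comment_spam": Rule("comment_spam", 1, 50),
    "dos": Rule("dos", 3, 200),
}

def judge_claim(offense, evidence, ratings, offender):
    rule = RULEBOOK[offense]
    if len(evidence) < rule.min_evidence:
        return "escalate"   # insufficient proof: needs a human judge
    ratings[offender] = ratings.get(offender, 0) - rule.penalty
    return "convicted"


ratings = {}
print(judge_claim("dos", ["e1", "e2", "e3"], ratings, "bot.example"))  # convicted
print(ratings["bot.example"])                                          # -200
print(judge_claim("comment_spam", [], ratings, "x.example"))           # escalate
```

Keeping the rulebook as plain data also satisfies the visibility requirement above: any host can inspect exactly which offenses exist and what they cost.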
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim cannot tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
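As a simplified stand-in for the encryption described above, the following sketch attaches a keyed MAC (HMAC-SHA256) to each log entry so that a judge holding the key can detect tampering by the claimant. The shared-key arrangement is an assumption for the example; real key distribution (and confidentiality, which a MAC alone does not provide) is not addressed here.

```python
# Sketch of tamper-evident evidence log entries using HMAC-SHA256.
# The shared judge key is a hypothetical stand-in for real key management.

import hashlib
import hmac

JUDGE_KEY = b"shared-secret-known-to-the-judge"  # assumption for the example

def seal(entry: str):
    """Return the entry plus a MAC tag a judge can later verify."""
    tag = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return entry, tag

def verify(entry: str, tag: str) -> bool:
    """True only if the entry was not modified after sealing."""
    expected = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


entry, tag = seal("2011-04-01 dos attempt from bot.example")
print(verify(entry, tag))           # True
print(verify("edited entry", tag))  # False
```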
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might contradict the Slave-Master List concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains the morality rating (MR) it had outside the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
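Severity-based routing in such a hierarchy could be sketched as follows; the numeric severity scale and the two-tier split are illustrative assumptions, since the text leaves the structure implementation specific.

```python
# Sketch of routing claims through a two-tier jurisdiction hierarchy by
# severity. The scale and the cutoff are illustrative assumptions.

def route_claim(severity, local_judges, high_court, cutoff=100):
    """Less severe claims go to lower-rung judges; severe ones go up."""
    return local_judges if severity < cutoff else high_court


print(route_claim(10, "local judges", "high court"))   # local judges
print(route_claim(500, "local judges", "high court"))  # high court
```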
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. &lt;br /&gt;
&lt;br /&gt;
A global implementation requires changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
&lt;br /&gt;
It effectively requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we are focusing on is usually produced by automated scripts which insert the same comment, usually including links to other destination websites, in the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site where the forum was hosted. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a service that is normally available is accessed either by numerous IPs through IP spoofing or, in a distributed DoS attack, by multiple valid IPs, bogging down the system as its service capacity is constrained. By maxing out or flooding a service with requests, the attacker either shuts the system down or makes it very difficult to use; either way, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site he/she is visiting to be secure. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not actually the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the usage of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne(1915),Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed 1910-, &#039;&#039;Ancient History Sourcebook:Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12]Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger.M.Needham, &#039;&#039;Denial of Service&#039;&#039;,ACM, New York, USA ,1993,[http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9059</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9059"/>
		<updated>2011-04-02T21:58:23Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Rules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the point of view of retributive punishment, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment to retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus he should internalize how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is removal from the network. It may be that the criminal computer had previously provided a very efficient connection to some data set, but now there is no way to communicate with it. As a result, members of the remaining network must use a less efficient connection to reach the same data, so the punishment has a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleological Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment matches the severity of the crime but still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote &#039;&#039;Leviathan&#039;&#039;, a treatise on how government and society should be structured. Within this work, Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws originating from a single entity who exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges to carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; together, punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule places one overall leader of justice who determines what is right and wrong in order to best serve the needs of the system.&lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crimes they commit. The fine imposed upon the criminal may not equal the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused.[1]&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals when the punishment is complete. For example, lawyers, judges, psychologists, and prison guards determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive; criminals are commonly asked to pay a fine as well as serve a prison sentence. However, the methods serve different purposes. All three punishment types deter future criminals, but each has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison reduces personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, which is divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad: good includes things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good includes terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6]&lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - State of Mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus is the action of the crime, and the mens rea is the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These range from the highest level, committing a crime on purpose, to the lowest, taking part in a crime negligently. An example of such a distinction would be deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity to avoid the need for human intervention and investigation.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime involving computers and the Internet. One preventative measure for these crimes was created in 1984, when the United States Congress passed the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many amendments because of ambiguity in how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changed the degree of punishment, and the difference between accessing a system and damaging a system had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did, and it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes the commands input by its user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error occurs in the system; for example, if a bit goes missing, sensitive information may be sent to an incorrect address. If this error caused great losses to some entity, would the user be blamed, or would the computer be blamed for negligently sending the data to the wrong address? This situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer were running a genetic programming system to create a program it deemed good, and then intentionally used that program, the intent to use it would differ from the computer continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer has been decided, the next thing to consider is how one would prevent a computer from taking malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences, or shame, for causing malicious actions. This approach to justice is difficult because computers do not have feelings; any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterrence only works on possible criminals who fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was running those functions or standing by idle. The human that forces a computer to take such malicious actions, however, may be deterred by the legal consequences that might follow, or by the performance drop on his own computer. The penalties currently in place only affect a human: whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent the human user from committing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system may one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating would allow nodes to communicate with other nodes based on how low or high the rating is. Lowering the rating for malicious actions and raising it for being helpful to the system would allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on these simulated feelings of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system.&lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is viewed as an incrementally deployable justice system which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract.&lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
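The rating-and-threshold mechanics described above can be sketched in a few lines. The following is a minimal illustration only; every name in it (MoralityLedger, the baseline rating of 0, the -100 threshold, the host IDs) is an assumption made for this example, not part of any specified implementation.&lt;br /&gt;

```python
# Hypothetical sketch of a per-network morality rating store.
# All names and values here are illustrative assumptions.

DEFAULT_RATING = 0  # every host starts at a predefined baseline


class MoralityLedger:
    """Tracks morality ratings local to one Justice Web."""

    def __init__(self, default=DEFAULT_RATING):
        self.default = default
        self.ratings = {}  # host_id mapped to its current rating

    def rating(self, host_id):
        """Unknown hosts are assumed to hold the default rating."""
        return self.ratings.get(host_id, self.default)

    def adjust(self, host_id, points):
        """Earn (positive) or lose (negative) points for behaviour."""
        self.ratings[host_id] = self.rating(host_id) + points

    def may_connect(self, host_id, threshold):
        """A service admits a host only if its rating meets the threshold."""
        return self.rating(host_id) >= threshold


ledger = MoralityLedger()
ledger.adjust("host-a", 50)    # e.g. shared bandwidth with the network
ledger.adjust("host-b", -150)  # e.g. convicted of comment spam
print(ledger.may_connect("host-a", -100))  # True
print(ledger.may_connect("host-b", -100))  # False
```

Because each MoralityLedger instance is local to one network, the same host could hold a different rating in a second ledger, mirroring the per-jurisdiction behaviour described above.&lt;br /&gt;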
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of all hosts in the Justice Web. Because the master list may grow large (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision subsets of the master list (which we call &#039;&#039;slave lists&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
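The threshold-based mirroring logic suggested above can be illustrated with a small sketch. This is an assumption-laden example (the function name, the 100-point threshold, and the host entries are all invented for illustration), not a prescribed design.&lt;br /&gt;

```python
# Illustrative sketch of splitting the master list: entries below a
# rating threshold are mirrored to a slave list on other hosts, while
# the rest remain on the central (master) server. Names are hypothetical.

def partition_list(master, threshold=100):
    """Return (central, slave): hosts at/above the threshold stay central,
    hosts below it are mirrored out to a slave list."""
    slave = {host: r for host, r in master.items() if r < threshold}
    central = {host: r for host, r in master.items() if r >= threshold}
    return central, slave


master = {"host-a": 250, "host-b": -150, "host-c": 40}
central, slave = partition_list(master)
print(sorted(central))  # ['host-a']
print(sorted(slave))    # ['host-b', 'host-c']
```

A real deployment would also need to keep the mirrored entries consistent as ratings change, which is where the distributed file system work cited above [15, 16] becomes relevant.&lt;br /&gt;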
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances, the process of reviewing evidence can be automated, requiring no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In such cases, the review of a judge becomes necessary.&lt;br /&gt;
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that the judge by default be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges are elected by a majority of hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, the level and type of proof required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but the rules should be agreed upon by all hosts on the network, or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under a legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).&lt;br /&gt;
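The three-part rule structure described above, together with an automated judge applying it to a claim, can be sketched as follows. The offense names, proof types, and penalty values are hypothetical placeholders, not rules any Justice Web actually defines.&lt;br /&gt;

```python
# Minimal sketch of a rule (offense, required proof, penalty) and an
# automated judge that convicts only when the submitted evidence
# satisfies the rule. All names and numbers are illustrative.

from dataclasses import dataclass


@dataclass
class Rule:
    offense: str
    required_proof: str  # level/type of evidence needed to convict
    penalty: int         # morality points lost on conviction


RULES = {
    "comment_spam": Rule("comment_spam", "signed_request_log", 50),
    "dos_attack": Rule("dos_attack", "traffic_capture", 200),
}


def judge_claim(offense, evidence_types, ratings, offender):
    """Apply the matching rule; return the offender's resulting rating.
    Insufficient evidence (or an unknown offense) changes nothing."""
    rule = RULES.get(offense)
    if rule is None or rule.required_proof not in evidence_types:
        return ratings.get(offender, 0)
    ratings[offender] = ratings.get(offender, 0) - rule.penalty
    return ratings[offender]


ratings = {"host-x": 10}
print(judge_claim("comment_spam", ["signed_request_log"], ratings, "host-x"))  # -40
```

Claims lacking the required proof would fall through to a human judge for review, as described in the Judges section.&lt;br /&gt;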
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web and keep track of that node&#039;s most recent network activity. The logs are encrypted to ensure that the computer making a claim cannot tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
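The text above does not specify a cryptographic scheme, so as one hedged illustration of tamper evidence we sketch a keyed MAC over each log entry, with the key shared only between the node and the judges. The key, field names, and entry format are all assumptions made for this example.&lt;br /&gt;

```python
# Illustrative sketch: tamper-evident log entries via HMAC. This is a
# stand-in for the unspecified encryption scheme in the text; a claimant
# without the key cannot alter an entry and still produce a valid tag.

import hashlib
import hmac
import json

JUDGE_KEY = b"shared-secret-between-node-and-judges"  # illustrative only


def sign_entry(entry, key=JUDGE_KEY):
    """Attach a keyed tag to one evidence-log entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"entry": entry, "tag": tag}


def verify_entry(signed, key=JUDGE_KEY):
    """Judges recompute the tag; a mismatch means the log was altered."""
    payload = json.dumps(signed["entry"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["tag"])


record = sign_entry({"src": "host-b", "dst": "host-a", "action": "spam_post"})
print(verify_entry(record))            # True
record["entry"]["action"] = "benign"   # tampering breaks verification
print(verify_entry(record))            # False
```

A full design would also need replay protection and a way to distribute the judge key, which this sketch deliberately leaves out.&lt;br /&gt;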
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might contradict with the Slave-Master List concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains the morality rating it had outside the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a ring of judges to handle every single claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by the lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but it is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host).&lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. A global implementation would require changes at multiple levels (OS, hardware, network protocols, infrastructure), effectively requiring a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we investigate is comment spam. The type of spam we focus on is usually produced by automated scripts which insert the same comment, often including links to other destination websites, on the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a service that is normally available is accessed either by numerous IPs through IP spoofing or by a distributed DoS attack using multiple valid IPs, which in turn bogs down the system because the service&#039;s capacity is constrained. By maxing out or flooding a service with multiple requests, the system is either shut down or made very difficult to use; either way, the outcome is a denial of that service. We have come up with some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system because the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be the secure one he/she intended to visit. Unfortunately, the site the user has been directed to has been designed to imitate the appearance and behaviour of the intended site, but it is not the real webpage. As a result, the user may expose private information to parties not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site to deceive users is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne(1915),Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed 1910-, &#039;&#039;Ancient History Sourcebook:Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12]Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review, 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review, 2003.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9058</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9058"/>
		<updated>2011-04-02T21:58:03Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Rules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice in a distributed computing environment. Although directly applying human concepts related to justice, such as intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions, so that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence.&lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and finite resources which can be used to manage the “benefits and burdens” of society.&lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
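A throttling punishment of this kind could be sketched as a simple token-bucket rate limiter. This is only an illustration of the idea; the class, method names, and numbers below are invented for this sketch and are not part of any proposed system.&lt;br /&gt;

```python
# Illustrative token-bucket throttle: a convicted bandwidth hog receives a
# smaller refill rate than a host in good standing. All numbers are arbitrary.

class Throttle:
    def __init__(self, rate: int, capacity: int):
        self.rate, self.capacity, self.tokens = rate, capacity, capacity

    def tick(self):
        """Refill once per time step, never exceeding capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, nbytes: int) -> bool:
        """Spend tokens to send; refuse if the bucket is too empty."""
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

normal = Throttle(rate=100, capacity=100)
punished = Throttle(rate=10, capacity=100)  # throttled after conviction
punished.tokens = 0
punished.tick()
assert normal.send(50)        # unthrottled host proceeds
assert not punished.send(50)  # only 10 tokens after one refill
```

The punishment here is purely a reduction of the refill rate, so the deviant host is corrected without being expelled, matching the teleologic view above.&lt;br /&gt;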
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to it. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, someone will get to exact revenge and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus he should internalize how the crime has a negative effect on society. [1]&lt;br /&gt;
&lt;br /&gt;
Although punishment may certainly be necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effect of the act, and it may adversely affect the system.&lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; the balance of punishment and reward are thus the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule means one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system.&lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused. [1]&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be required to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types deter future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad; for example, good includes things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil; for example, good includes charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6]&lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. The highest level is committing a crime on purpose, and the lowest is being part of a crime negligently. An example of such a distinction is deciding whether a car hitting someone was intentional or an accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to avoid the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime involving computers and the Internet. One preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since its introduction, it has required many changes because it was initially unspecific about how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” doing an act changes the degree of punishment, and the difference between accessing a system and damaging a system had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures then current on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that the computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true, except in a case where an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created many losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program it deemed good, and then intentionally used that program, the intent to use it would differ from the computer continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer has been decided, the next thing to consider is how one would prevent a computer from performing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences or shame associated with malicious actions. This approach to justice is problematic because computers do not have feelings; any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing idle. However, the human who forces his computer to perform such malicious actions may be deterred by the consequences that might follow from the law, or by the possible performance drop on his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, the punishment would not prevent further malicious actions on the network by the human user using another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating allows nodes to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for being helpful to the system allows computers to &amp;quot;care&amp;quot; about whom they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of the system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system.&lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is intended to be an incrementally deployable justice system, which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to hosts inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honour a contract.&lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
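As a sketch of how a node might enforce such a threshold, consider the check below. The function and variable names, the default rating of 0, and the -100 cutoff (taken from the example above) are all illustrative assumptions, not specified parts of the Justice Web.&lt;br /&gt;

```python
# Hypothetical sketch: a node gating incoming connections by morality rating (MR).
# DEFAULT_MR and the -100 threshold are illustrative values only.

DEFAULT_MR = 0  # every host starts with a predefined rating

def may_connect(ratings: dict, host_id: str, threshold: int = -100) -> bool:
    """Allow a connection only if the host's MR is above the threshold.

    Unknown hosts fall back to the default starting rating.
    """
    return ratings.get(host_id, DEFAULT_MR) > threshold

ratings = {"host-a": 50, "host-b": -250}
assert may_connect(ratings, "host-a")       # good standing
assert not may_connect(ratings, "host-b")   # below the cutoff
assert may_connect(ratings, "host-c")       # unknown host gets the default MR
```

Each service could supply its own threshold, which matches the idea that rules are chosen per service provider.&lt;br /&gt;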
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow to a large size (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
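The mirroring rule mentioned above could be sketched as a simple partition of the list on an MR cutoff. This is only one possible policy; the function name and the cutoff of 100 (from the example in the text) are illustrative.&lt;br /&gt;

```python
# Illustrative sketch: split the master morality list into a "slave list"
# (mirrored to other hosts) and the remainder, using the MR < 100 rule
# mentioned above as the mirroring policy.

def partition(master: dict, cutoff: int = 100):
    """Return (slave_list, remaining_master) split on the MR cutoff."""
    slave = {h: mr for h, mr in master.items() if mr < cutoff}
    rest = {h: mr for h, mr in master.items() if mr >= cutoff}
    return slave, rest

master = {"a": 150, "b": 20, "c": -40}
slave, rest = partition(master)
assert slave == {"b": 20, "c": -40}  # low-MR entries go to the slave list
assert rest == {"a": 150}            # the rest stay on the master
```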
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender based on the severity of the offense. We note that in many instances, the process of reviewing evidence can be automated and would require no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In cases like these, the review of a judge becomes necessary.&lt;br /&gt;
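The automated part of claim review could be sketched as follows. The severity table, evidence check, and return convention are placeholders for whatever a real Justice Web's rules would specify.&lt;br /&gt;

```python
# Hypothetical sketch of automated claim review by a judge node. The
# severity table and the boolean evidence check are placeholder assumptions.

SEVERITY = {"comment_spam": 10, "dos": 50}  # MR points lost per offense

def process_claim(ratings: dict, offender: str, offense: str,
                  evidence_ok: bool) -> bool:
    """Apply the punishment if the evidence checks out; return success.

    Claims with insufficient evidence (or unknown offenses) are refused
    here and would be escalated to a human judge for review.
    """
    if not evidence_ok or offense not in SEVERITY:
        return False
    ratings[offender] = ratings.get(offender, 0) - SEVERITY[offense]
    return True

ratings = {"host-x": 0}
assert process_claim(ratings, "host-x", "dos", evidence_ok=True)
assert ratings["host-x"] == -50
assert not process_claim(ratings, "host-x", "dos", evidence_ok=False)
```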
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that by default the judge be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by the majority of hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, the level and type of proof required for an offender to be convicted, and the severity of the punishment (e.g., how many morality rating points the host will lose). The creation of these rules is by default left up to the judges, but they should be agreed upon by all hosts on the network, or at the very least be visible to all hosts. This is akin to a human justice system, where everyone under a legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the Criminal Code of Canada]).&lt;br /&gt;
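The three-part rule described above can be written down as a small record type. The field names and the example rules are invented for illustration; a real Justice Web would define its own vocabulary of offenses and proof requirements.&lt;br /&gt;

```python
# Minimal sketch of the three-part rule (offense, required proof, severity).
# All field names and sample values are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    offense: str   # the deviant act the rule covers
    proof: str     # level/type of evidence needed to convict
    severity: int  # morality rating points a convicted host loses

rules = [
    Rule("comment_spam", "signed request log", 10),
    Rule("denial_of_service", "traffic capture from >= 3 witnesses", 50),
]
assert rules[1].severity > rules[0].severity  # worse offense, harsher penalty
```

Publishing such records to every host would satisfy the requirement that the rules be visible to all members of the network.&lt;br /&gt;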
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim did not tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
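One building block for tamper-evident logs is a keyed hash (HMAC) over each entry. Note this is a sketch of only part of the mechanism: the article calls for encryption, while an HMAC provides integrity rather than confidentiality, and how the key is shared with the judges is left unspecified here.&lt;br /&gt;

```python
# Sketch: seal each evidence-log entry with an HMAC so a judge can detect
# tampering by the claimant. Key management is out of scope; the key below
# is a placeholder.

import hashlib
import hmac

KEY = b"judge-shared-secret"  # placeholder; a real system needs key distribution

def seal(entry: bytes) -> bytes:
    """Compute the integrity tag stored alongside a log entry."""
    return hmac.new(KEY, entry, hashlib.sha256).digest()

def verify(entry: bytes, tag: bytes) -> bool:
    """Constant-time check that an entry matches its tag."""
    return hmac.compare_digest(seal(entry), tag)

entry = b"2011-04-02T21:58 host-b sent 10k identical POSTs to host-a"
tag = seal(entry)
assert verify(entry, tag)
assert not verify(entry + b" (edited)", tag)  # tampering is detected
```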
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the slave/master list concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains the morality rating (MR) it had outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host).&lt;br /&gt;
&lt;br /&gt;
*How are judges elected? Self-governing entities often have a common set of laws, but not necessarily the same laws as other entities. There are some known cases (the UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
Given these issues, we do not believe an incrementally deployable solution is possible.&lt;br /&gt;
&lt;br /&gt;
A global implementation requires changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
&lt;br /&gt;
It effectively requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is produced by automated scripts which insert the same comment, usually including links to other destination websites, into forums on public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site where the forum was hosted.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
A denial of service attack occurs when a service that is normally available is flooded with requests, either from numerous spoofed IP addresses or, in a distributed DoS attack, from multiple valid IP addresses. By maxing out or flooding the service beyond its capacity, the attack either shuts the system down or makes it very difficult to use; in either case, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntary or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system because the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be the secure one he or she intended to visit. Unfortunately, the site the user has been directed to is designed to imitate the appearance and behaviour of the intended site, but it is not the real webpage. As a result, the user may expose private information to parties that are not legally entitled to view it. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, its use in this way is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review, 2003.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9057</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9057"/>
		<updated>2011-04-02T21:52:05Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Judges */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
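&lt;br /&gt;
The throttling example above can be sketched in a few lines of Python. This is a hypothetical illustration only: the Host class, the 50% throttle factor, and the rule of redistributing the freed capacity evenly among the remaining hosts are our own assumptions, not part of any deployed system.&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch of a teleologic punishment: a bandwidth hog is
# throttled and the freed capacity is redistributed, so the punishment
# itself benefits the rest of the society. All names are illustrative.

class Host:
    def __init__(self, name, bandwidth_share):
        self.name = name
        self.bandwidth_share = bandwidth_share  # fraction of link capacity

def throttle(hosts, offender_name, factor=0.5):
    """Scale the offender down by factor and share the freed capacity
    evenly among the remaining hosts."""
    offender = next(h for h in hosts if h.name == offender_name)
    freed = offender.bandwidth_share * (1 - factor)
    offender.bandwidth_share = offender.bandwidth_share * factor
    others = [h for h in hosts if h.name != offender_name]
    for h in others:
        h.bandwidth_share = h.bandwidth_share + freed / len(others)
    return hosts
```
Note that the punishment is teleologic: the offender is corrected, and the rest of the society measurably benefits from the freed capacity.&lt;br /&gt;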
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “an eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment with retaliation. Although both incorporate the concept of punishment as a just, and necessary, act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge and you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed within the system such that some computers can hand out punishments upon other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system into a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are those members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals to decide when the punishment is complete. For example, it is up to lawyers, judges, psychologists and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals are commonly asked to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad; for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil; for example, good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, acting upon a crime on purpose, to the lowest, being part of a crime negligently. An example of such a distinction would be determining whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to prevent the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime to the Internet and to computers. An example of a preventative measure for these crimes was created in 1984 by the United States Congress: the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because it was initially unspecific about how different crimes were to be categorized based on the mens rea. The distinction between “knowingly” and “intentionally” doing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures of the day on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did, and it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except in a case where an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are built, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer ran some genetic programming to create a program it deemed good, and then intentionally used it, that intent to use would differ from the computer continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be determined, the next thing to consider is how to prevent a computer from performing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some sort of fear of the consequences or shame that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is all the same to a computer. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that a malicious act will procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing by idle. However, the human who forces a computer to perform such malicious actions may be deterred by the consequences that might follow from the law, or by the possible performance drop of his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed while the computer element remains. If the computer itself were the only one punished for such malicious actions, that would not prevent further malicious actions from occurring on the network by the same human user at another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would aim to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented such that computers on the system are deterred from malicious actions. Implementing a morality system in which every node has a personal morality rating would allow nodes to decide whether to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for being helpful to the system would allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and to &amp;quot;feel shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
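&lt;br /&gt;
To make the tiered-communication idea above concrete, the following sketch maps a node&#039;s morality rating to the interactions it is permitted, with expulsion at the lowest level. The thresholds and action names are arbitrary assumptions chosen for illustration.&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative only: thresholds and action names are assumptions.
def permitted_actions(morality_rating):
    """Map a morality rating to the set of interactions a node may use."""
    if morality_rating >= 100:
        return {"read", "write", "serve"}   # trusted social interactions
    if morality_rating >= 0:
        return {"read", "write"}            # ordinary standing
    if morality_rating > -200:
        return {"read"}                     # shamed: basic access only
    return set()                            # lowest level: expulsion
```
A node consulting such a table before answering a request would, in effect, &amp;quot;care&amp;quot; about the moral standing of its peers.&lt;br /&gt;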
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. This does not mean it is unimportant to describe the features that a fully functional system would require and to outline its potential benefits and shortcomings. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is an incrementally deployable justice system which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to hosts inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving points to, or removing points from, hosts according to whether they honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality rating in one network might have a good one in another. Indeed, each Justice Web may have a different set of rules, which may overlap with or be entirely different from those of another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large as more hosts are added, list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision subsets of the master list (which we call &#039;&#039;slave lists&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries for hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
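&lt;br /&gt;
The rating-based mirroring rule above can be sketched as a simple partition of the list. The cut-off of 100 comes from the example in the text; the dictionary layout is an assumption for illustration.&lt;br /&gt;

```python
# Sketch of splitting the master list into a master part and a
# "slave" part by rating, following the example rule (entries with
# a rating below 100 go to a slave list). Cut-off and data layout
# are illustrative assumptions.

SLAVE_CUTOFF = 100

def partition(master_list):
    """Return (master_part, slave_part) split by morality rating."""
    master_part, slave_part = {}, {}
    for host, rating in master_list.items():
        if rating < SLAVE_CUTOFF:
            slave_part[host] = rating
        else:
            master_part[host] = rating
    return master_part, slave_part

ratings = {"a": 250, "b": 40, "c": -10}
master_part, slave_part = partition(ratings)
assert master_part == {"a": 250}
assert slave_part == {"b": 40, "c": -10}
```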
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating of the offender according to the severity of the offense. We note that in many instances the process of reviewing evidence can be automated, requiring no manual verification. However, some cases may not have enough evidence, or there may be a reason that justifies the offense. In such cases, review by a judge becomes necessary. &lt;br /&gt;
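&lt;br /&gt;
The automated part of claim handling might look like the following sketch. The rule table, the evidence-sufficiency check, and the penalty values are illustrative assumptions, and decryption of the evidence log is omitted.&lt;br /&gt;

```python
# Sketch of automated claim handling by a judge node. The rule
# table, evidence check, and penalty values are assumptions made
# for illustration only.

RULES = {
    # offense: (minimum evidence entries required, penalty)
    "spam": (1, -50),
    "ddos": (3, -200),
}

def judge_claim(ratings, offender, offense, evidence):
    """Apply a penalty if the (already decrypted) evidence is
    sufficient; otherwise defer the claim for manual review."""
    needed, penalty = RULES[offense]
    if len(evidence) < needed:
        return "deferred to manual review"
    ratings[offender] = ratings.get(offender, 0) + penalty
    return "penalty applied"

ratings = {"mallory": 0}
assert judge_claim(ratings, "mallory", "spam", ["log entry"]) == "penalty applied"
assert ratings["mallory"] == -50
assert judge_claim(ratings, "mallory", "ddos", []) == "deferred to manual review"
```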
&lt;br /&gt;
While the appointment of judges is beyond our scope (the contracts team may be better suited for this task), we suggest that by default the judge be the node in the network with the highest morality rating. This node will generally be the owner of the network, but it may vary from one network to another. Another possible way to select judges is a democratic approach, where judges must be elected by a majority of the hosts in the network.&lt;br /&gt;
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
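&lt;br /&gt;
The three parts of a rule map naturally onto a small record. The field names and example values here are assumptions for illustration only.&lt;br /&gt;

```python
# A rule as the three parts named above: the offense, the level of
# proof needed, and the severity of the punishment. Field names and
# example values are illustrative assumptions.
from collections import namedtuple

Rule = namedtuple("Rule", ["offense", "proof_level", "severity"])

comment_spam = Rule(offense="comment spam",
                    proof_level="automated log match",
                    severity=-50)

assert comment_spam.severity == -50
```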
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of that computer&#039;s most recent network activity. The logs are encrypted to ensure that the computer making a claim cannot tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
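&lt;br /&gt;
One way to get the tamper-evidence that the encryption is meant to provide is a keyed message authentication code over each log entry, using a key shared with the judges. The sketch below shows only this integrity half; key distribution and the actual encryption are out of scope, and all names are assumptions of this illustration.&lt;br /&gt;

```python
# Tamper-evident evidence log sketch. The article calls for
# encrypted logs; here we sketch only the integrity half, using an
# HMAC over each entry keyed with a secret shared with the judges.
# Key handling and encryption would need a real design.
import hmac
import hashlib

JUDGE_KEY = b"shared-secret-with-judges"   # placeholder key

def append_entry(log, entry):
    """Record an activity entry together with its authentication tag."""
    tag = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    log.append((entry, tag))

def verify(log):
    """Judges recompute the tag for every entry upon retrieval."""
    return all(
        hmac.compare_digest(
            tag, hmac.new(JUDGE_KEY, e.encode(), hashlib.sha256).hexdigest())
        for e, tag in log)

log = []
append_entry(log, "2011-04-11 host-x sent 10000 identical comments")
assert verify(log)

log[0] = ("2011-04-11 nothing happened", log[0][1])   # tampering
assert not verify(log)
```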
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with malicious dispositions.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the master-slave list concept.)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains the morality rating it had outside the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
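&lt;br /&gt;
Routing claims through such a hierarchy could be as simple as matching claim severity against tier boundaries. The tiers and thresholds below are purely illustrative assumptions.&lt;br /&gt;

```python
# Sketch of routing claims to judge tiers by severity, so that the
# less severe claims are handled by lower rungs. Tier names and
# boundaries are illustrative assumptions.

TIERS = [
    (50, "local judge"),       # severity below 50
    (200, "regional judge"),   # severity below 200
]
TOP_TIER = "ring of top judges"

def route(severity):
    """Return the lowest tier responsible for a claim's severity."""
    for limit, tier in TIERS:
        if severity < limit:
            return tier
    return TOP_TIER

assert route(10) == "local judge"
assert route(120) == "regional judge"
assert route(500) == "ring of top judges"
```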
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
* Where should the master morality list be stored? Distributed storage at a global level is possible, but it is subject to tampering or simple denial of service (refusal to respond with the morality rating of a given host). &lt;br /&gt;
&lt;br /&gt;
* How are judges elected? Self-governing entities often have a common set of laws, but not necessarily the same laws as other entities. There are some known cases (the UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
* Other issues remain to be identified.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. A global Justice Web would require changes at multiple levels (operating systems, hardware, network protocols, and infrastructure); in effect, it requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is typically produced by automated scripts that post the same comment, usually including links to other websites, on the forums of public sites. Although usually just annoying, these comments can direct users to locations where malicious code may be served to unsuspecting visitors who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a normally available service is flooded with requests, either from numerous spoofed IPs or, in a distributed DoS attack, from multiple valid IPs. Because the service&#039;s capacity is constrained, flooding it with requests either shuts the system down or makes it very difficult to use; in either case, the outcome is a denial of that service. We have come up with some approaches to punishing the computers that participate in such an attack, voluntarily or not. [14]&lt;br /&gt;
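&lt;br /&gt;
As a sketch of how a victim node might gather claim candidates, it could count requests per source within a time window and flag sources that exceed a limit. The window and limit are illustrative assumptions; spoofed addresses make real attribution much harder, which is one reason the Justice Web assumes unique host identification.&lt;br /&gt;

```python
# Sketch of how a node might notice request flooding and prepare a
# claim against the offending sources. The per-window request limit
# is an illustrative assumption; real detection is much harder,
# especially with spoofed source addresses.
from collections import Counter

REQUEST_LIMIT = 100   # max requests per source per time window

def flagged_sources(requests):
    """requests: list of source addresses seen in one time window.
    Returns the sources exceeding the limit, as claim candidates."""
    counts = Counter(requests)
    return {src for src, n in counts.items() if n > REQUEST_LIMIT}

window = ["10.0.0.5"] * 150 + ["10.0.0.9"] * 3
assert flagged_sources(window) == {"10.0.0.5"}
```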
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be secure. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user wished to visit, but it is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915) and Claude Hermann Walter Johns (The Encyclopaedia Britannica, 11th ed., 1910), &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, ed. Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9056</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9056"/>
		<updated>2011-04-02T21:38:10Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Master List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections. The first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles that can be assigned and finite resources that can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment benefits the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to use.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment. [3] From the retributive point of view, it is better to punish someone who commits a crime, regardless of the severity of the punishment. [2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation rests on the concept of deterrence: if you are convicted of a crime, someone will get to exact revenge, and thus you will pay a price. Retribution requires the criminal to pay a price for the crime committed, and thus to internalize how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment benefits a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may further damage the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effects of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to use the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can impose punishments on other, criminal computers. Here we briefly discuss a few methods that may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus punishment and reward together are the “nerves and joints which move the limbs of a commonwealth.” [4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is. [5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals to decide when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended. [5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive; criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence. However, they serve different purposes. All three punishment types deter future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison reduces personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive. [6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others could certainly work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
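&lt;br /&gt;
One hypothetical way to ground these moral terms in measurable quantities is a weighted score over network metrics. Every metric name, weight, and value below is an assumption for illustration; the article does not prescribe a formula.&lt;br /&gt;

```python
# Sketch of grounding "good" (strength, health, wealth) in measurable
# network qualities, as the paragraph suggests. Metric names, weights,
# and normalisation are entirely illustrative assumptions.

WEIGHTS = {"bandwidth_share": 2.0, "uptime": 1.0, "data_integrity": 3.0}
PENALTIES = {"spam_reports": -5.0, "dos_participation": -50.0}

def morality_score(metrics):
    """Weighted sum of observed metrics; missing metrics count as 0."""
    score = 0.0
    for name, weight in list(WEIGHTS.items()) + list(PENALTIES.items()):
        score += weight * metrics.get(name, 0)
    return score

good = {"bandwidth_share": 10, "uptime": 30, "data_integrity": 5}
bad = dict(good, spam_reports=4, dos_participation=1)

assert morality_score(good) == 65.0
assert morality_score(bad) == -5.0
```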
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. The highest level is committing a crime on purpose, and the lowest is taking part in a crime negligently. An example of such a distinction is determining whether a car hit someone intentionally or by accident. [7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before punishment is dispensed, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to avoid the need for human intervention and investigation.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crimes involving computers and the Internet. One preventative measure, the Computer Fraud and Abuse Act (“CFAA”), was enacted by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since its introduction it has required many amendments, because how different crimes were categorized under the mens rea was often left unspecific. The distinction between “knowingly” and “intentionally” committing an act changes the degree of punishment, and over time the statute also had to distinguish more precisely between accessing a system and damaging one. [8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on the computer networks of the day (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to try to prove that it was his intent to access unauthorized computers, which it did, and it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea. [9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error occurs in the system, or a bit goes missing, and sensitive information is sent to an incorrect address. If this error caused losses to some entity, would the user be blamed for it, or would the computer be blamed for negligently sending the data to the wrong address? This situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is intent. [12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer ran a genetic programming system to create a program it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next consideration is how to prevent a computer from taking malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences, or shame, in response to malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterrence only works on those who fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was performing those functions or standing idle. The human who forces their computer to take such malicious actions, however, may be deterred by the consequences that follow from the law, or by the performance drop on their own computer. Current penalties only affect humans; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent the human user from causing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would eventually want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; a new system must therefore be implemented so that computers on the system are deterred from malicious actions. One option is a morality system in which every node carries a personal morality rating, and nodes decide whether to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for being helpful to the system allows computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel &amp;quot;shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on these simulated feelings of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project, but it is still important to describe the features a fully functional system would require and to outline its potential benefits and shortcomings. In fact, we were unable to come up with a single system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. It is intended to be an incrementally deployable justice system, which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to hosts inside the network, as well as to connections coming from outside it. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rulings. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed to connect. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules, which may overlap with or be entirely different from those of another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
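&lt;br /&gt;
The rating mechanics described above can be sketched as follows; the class name and point values are invented for illustration, while the -100 connection threshold follows the example rule above:&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch of a morality rating (MR) store for one Justice Web.
# MoralityLedger, STARTING_RATING, and the point values are illustrative
# assumptions, not part of any specified protocol.

STARTING_RATING = 0  # every host begins at a predefined rating

class MoralityLedger:
    def __init__(self):
        self.ratings = {}

    def rating(self, host_id):
        return self.ratings.get(host_id, STARTING_RATING)

    def reward(self, host_id, points):
        # e.g. the host shared bandwidth or honoured a contract
        self.ratings[host_id] = self.rating(host_id) + points

    def punish(self, host_id, points):
        # e.g. the host was convicted of comment spam
        self.ratings[host_id] = self.rating(host_id) - points

    def may_connect(self, host_id, threshold=-100):
        # server-side rule: refuse hosts whose rating falls below threshold
        return self.rating(host_id) >= threshold

ledger = MoralityLedger()
ledger.reward("host-a", 50)           # shared bandwidth
ledger.punish("host-b", 150)          # convicted spammer
print(ledger.may_connect("host-a"))   # True
print(ledger.may_connect("host-b"))   # False: rating is -150
```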
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of all hosts in the Justice Web. Because the master list may grow to a large size (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
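&lt;br /&gt;
One way the rating-based mirroring could work (a minimal sketch; the function name and the dictionary representation are assumptions) is to split entries at the 100-point cutoff mentioned above:&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative split of the master list by rating: entries below the
# cutoff are mirrored to other hosts as a slave list, the rest stay on
# the central server. Names and the cutoff default are assumptions.

def partition_list(ratings, cutoff=100):
    master, slave = {}, {}
    for host, mr in ratings.items():
        if mr >= cutoff:
            master[host] = mr   # kept on the central server
        else:
            slave[host] = mr    # mirrored to other hosts as a slave list
    return master, slave

ratings = {"a": 250, "b": 40, "c": -120}
master, slave = partition_list(ratings)
print(master)   # {'a': 250}
print(slave)    # {'b': 40, 'c': -120}
```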
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating (MR) of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
By default, the judge is the node in the network with the highest MR (usually the owner), though this choice is implementation specific. Alternatively, a ring of judges is appointed and given the power to lower the MR of other judges, but the evidence must be validated by a majority of the judges. This is intended to keep the judges in check.&lt;br /&gt;
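&lt;br /&gt;
A majority-validation round for a ring of judges could be sketched as follows; the claim format and the stand-in judge callables are hypothetical:&lt;br /&gt;
&lt;br /&gt;
```python
# Hedged sketch of the judge ring: a claim lowers the offender's MR only
# if a strict majority of judges deem the evidence sufficient. The judge
# callables are stand-ins for real evidence validation.

def adjudicate(claim, judges, ledger, penalty):
    votes = sum(1 for judge in judges if judge(claim))
    if votes > len(judges) // 2:          # strict majority required
        offender = claim["offender"]
        ledger[offender] = ledger.get(offender, 0) - penalty
        return True
    return False                          # claim rejected, MR unchanged

ledger = {"mallory": 10}
claim = {"offender": "mallory", "evidence": "encrypted-log-blob"}
judges = [lambda c: True, lambda c: True, lambda c: False]  # 2 of 3 accept
print(adjudicate(claim, judges, ledger, penalty=50))  # True
print(ledger["mallory"])                              # -40
```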
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
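&lt;br /&gt;
A rule&#039;s three parts could be encoded along these lines; the field names and the sample offenses and penalties are invented for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
# One possible encoding of a rule as the three parts named above: the
# offense, the level of proof needed, and the severity of the punishment.
from dataclasses import dataclass

@dataclass
class Rule:
    offense: str
    proof_required: int   # e.g. number of corroborating evidence logs
    penalty: int          # MR points deducted on conviction

RULES = {
    "comment_spam": Rule("comment_spam", proof_required=1, penalty=20),
    "dos_attack": Rule("dos_attack", proof_required=3, penalty=200),
}

def judge_claim(offense, evidence_logs):
    rule = RULES.get(offense)
    if rule is not None and len(evidence_logs) >= rule.proof_required:
        return rule.penalty   # MR points to deduct
    return 0                  # claim rejected: no punishment

print(judge_claim("dos_attack", ["log1", "log2", "log3"]))  # 200
print(judge_claim("dos_attack", ["log1"]))                  # 0
```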
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim did not tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
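&lt;br /&gt;
The text does not fix a concrete scheme; as one stand-in for the encryption it describes, a log could be made tamper-evident by chaining keyed hashes (HMACs) under a key assumed to be shared only with the judges, so a claimant cannot rewrite earlier entries unnoticed:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch only: each record's tag covers the previous tag, so editing any
# earlier record invalidates every later tag. The shared key is a
# hypothetical assumption (held by the judges, not the claimant).
import hmac
import hashlib

JUDGE_KEY = b"assumed-shared-secret"

def append(log, record):
    prev_tag = log[-1][1] if log else b""
    tag = hmac.new(JUDGE_KEY, prev_tag + record.encode(), hashlib.sha256).digest()
    log.append((record, tag))

def verify(log):
    prev_tag = b""
    for record, tag in log:
        expect = hmac.new(JUDGE_KEY, prev_tag + record.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            return False          # a record was altered after the fact
        prev_tag = tag
    return True

log = []
append(log, "2011-04-01 conn from host-b:80")
append(log, "2011-04-01 refused spam burst")
print(verify(log))                    # True
log[0] = ("edited entry", log[0][1])  # claimant tampers with evidence
print(verify(log))                    # False
```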
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the slave/master list concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
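&lt;br /&gt;
Routing claims to rungs of such a hierarchy by severity could be sketched as follows; the tier boundaries and ring names are invented:&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative jurisdiction tiers: a claim is routed to the lowest rung
# of the judge hierarchy whose severity ceiling covers it, so top judges
# only see the most severe cases.

def route_claim(severity, tiers):
    # tiers: list of (max_severity, judge_ring_name), ordered ascending
    for max_severity, ring in tiers:
        if max_severity >= severity:
            return ring
    return tiers[-1][1]   # anything beyond the last ceiling goes to the top

TIERS = [(50, "local-judges"), (200, "regional-judges"), (10**9, "high-council")]
print(route_claim(20, TIERS))    # local-judges
print(route_claim(500, TIERS))   # high-council
```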
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as other entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. A global implementation would:&lt;br /&gt;
&lt;br /&gt;
* Require changes at multiple levels (OS, hardware, network protocols, infrastructure)&lt;br /&gt;
&lt;br /&gt;
* Require a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we are focusing on is produced by automated scripts which insert the same comment, usually including links to other destination websites, on forums of public sites. Although usually just annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a normally available service is accessed by numerous IPs, either through IP spoofing or through a distributed DoS attack using multiple valid IPs, bogging the system down as the service&#039;s capacity is exhausted. By flooding a service with requests, the attacker either shuts the system down or makes it very difficult to use; either way, the outcome is denial of that service. We have come up with some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system because the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be the secure site he or she intended to visit. Unfortunately, the site the user has been directed to has been designed to imitate the appearance and behaviour of the intended site, but it is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, its use in this way is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne(1915),Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed 1910-, &#039;&#039;Ancient History Sourcebook:Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12]Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger.M.Needham, &#039;&#039;Denial of Service&#039;&#039;,ACM, New York, USA ,1993,[http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9055</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9055"/>
		<updated>2011-04-02T21:27:55Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Master List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections; the first is a discussion on theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in four deviant acts; comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment; &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment benefits the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, but society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to discuss retributive punishment in comparison to retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, someone will get to exact revenge and you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment benefits a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to society may further damage the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed within the system such that some computers can hand out punishments upon other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system into a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable as the law is known to all members of the society. The exceptions to this are members of society that are without reason, for example “children and madmen”. Punishment is a necessary evil and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self preservation. To balance the system the sovereign may also reward individuals, and thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories; corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals to decide when the punishment is complete. For example, lawyers, judges, psychologists and prison guards determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive as commonly criminals may be asked to pay a penalty fine as well as serve a prison sentence; however they all serve different purposes. All three punishment types serve as a deterrent to future criminals but each method has a different active agent; corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;. Master-morality is framed as good vs. bad: good is associated with wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is framed as good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others could certainly work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, mounting DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer&#039;s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (&amp;quot;MPC&amp;quot;) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime purposely, to the lowest, contributing to a crime negligently. An example of such a distinction is deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to prevent the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime involving the Internet and computers. In 1984 the United States Congress responded with a criminal statute called the Computer Fraud and Abuse Act (&amp;quot;CFAA&amp;quot;), built on the ideas of mens rea and the MPC. Since it was first implemented, the act has required many changes because it was unspecific about how different crimes were categorized based on the mens rea. Over time, the distinction between &amp;quot;knowingly&amp;quot; and &amp;quot;intentionally&amp;quot; committing an act came to change the degree of punishment, and the difference between accessing a system and damaging a system had to be specified more precisely.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures then in place on computer networks by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that he intended to access unauthorized computers, which it did; it also tried to prove that he intended to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that the computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when, for example, an error occurs in the system or a bit goes missing, so that sensitive information intended for one address is sent to an incorrect one. If this error caused large losses to some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from performing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of the consequences or shame that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings; any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that a malicious act would bring. If a computer&#039;s punishment were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing idle. However, the human who directs a computer to perform malicious actions may be deterred by the consequences that follow from the law, or by the possible performance drop on his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent the human user from causing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system may one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating would allow nodes to decide whether to communicate with other nodes based on how low or high that rating is. With ratings lowered by malicious actions and raised by being helpful to the system, computers could &amp;quot;care&amp;quot; about who they communicate with, and could feel &amp;quot;shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated care and shame might allow a justice system to be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features a fully functional system would require and to outline the potential benefits and shortcomings of the system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is intended to be incrementally deployable, and we refer to it as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to the nodes inside the network, as well as to connections coming from outside it. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
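The point mechanics above could be sketched as follows. This is a hypothetical illustration, not part of the proposal: the class and function names, the default rating of 0, and the point values are all invented for the example; only the -100 admission threshold comes from the text.&lt;br /&gt;

```python
# Hypothetical sketch of a morality-rating record with "points"
# adjustments and a per-service admission threshold, as described above.

DEFAULT_RATING = 0  # each host begins with a predefined rating (assumed 0 here)

class Host:
    def __init__(self, host_id, rating=DEFAULT_RATING):
        self.host_id = host_id
        self.rating = rating

    def reward(self, points):
        """Earn points, e.g. for sharing bandwidth or honouring a contract."""
        self.rating += points

    def punish(self, points):
        """Lose points for deviant behaviour."""
        self.rating -= points

def may_connect(host, threshold=-100):
    """A server-side rule: refuse hosts whose rating falls below the threshold."""
    return host.rating >= threshold

spammer = Host("203.0.113.7")
spammer.punish(150)
print(may_connect(spammer))  # False: a rating of -150 falls below -100
```

A service provider would evaluate `may_connect` with its own threshold on every incoming connection, so enforcement stays local to each node as the text requires.&lt;br /&gt;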
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
&lt;br /&gt;
====Master List====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of hosts in the Justice Web. Because the master list may grow large (as more hosts are added), list storage becomes an important consideration. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. The mirroring logic could depend on the morality ratings themselves (e.g., entries of hosts with a morality rating of less than 100 could be stored in a slave list, and the rest in the master list). Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].&lt;br /&gt;
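The rating-based mirroring rule suggested above can be sketched in a few lines. The cutoff of 100 follows the example in the text; the function name and the dictionary representation of the list are assumptions for illustration.&lt;br /&gt;

```python
# Minimal sketch of the master/slave-list split described above:
# entries below a cutoff rating are mirrored to slave lists held on
# other hosts; the rest stay on the master.

def partition_list(ratings, cutoff=100):
    """Split a {host_id: rating} mapping into (master, slave) parts."""
    master = {h: r for h, r in ratings.items() if r >= cutoff}
    slave = {h: r for h, r in ratings.items() if r < cutoff}
    return master, slave

ratings = {"alice": 250, "bob": 40, "carol": -10}
master, slave = partition_list(ratings)
print(sorted(master))  # ['alice']
print(sorted(slave))   # ['bob', 'carol']
```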
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the morality rating (MR) of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
By default, the judge is the node in the network with the highest MR (usually the owner), though this choice is implementation specific. Alternatively, a ring of judges is appointed and given the power to lower the MR of other judges, with evidence validated by a majority of the judges. This would be implemented as an attempt to keep the judges in check.&lt;br /&gt;
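The majority-validation rule for a ring of judges could look like the sketch below. Each judge is modelled as a function that inspects the (already decrypted) evidence; the specific judge policies shown are invented placeholders, since the text leaves them implementation specific.&lt;br /&gt;

```python
# Sketch of majority validation by a ring of judges: a claim is upheld
# only if a strict majority of judges accepts the evidence.

def majority_validates(judges, evidence):
    """Return True if more than half of the judges accept the evidence."""
    votes = sum(1 for judge in judges if judge(evidence))
    return votes > len(judges) // 2

# Three hypothetical judges with trivial placeholder policies:
judges = [
    lambda ev: ev.get("log_intact", False),
    lambda ev: ev.get("offender_id") is not None,
    lambda ev: len(ev.get("events", [])) > 0,
]

claim = {"log_intact": True, "offender_id": "host-42", "events": ["spam"]}
print(majority_validates(judges, claim))  # True: all three judges accept
```

Requiring a strict majority means no single compromised judge can convict on its own, which is the "keeping judges in check" property described above.&lt;br /&gt;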
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, the level of proof needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
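The three-part rule described above could be represented as a simple record, as in the sketch below. The particular offenses, proof levels, and severities are invented for illustration; the text only fixes the three fields themselves.&lt;br /&gt;

```python
# A rule has three parts: the offense, the level of proof needed,
# and the severity of the punishment (here, MR points deducted).

from dataclasses import dataclass

@dataclass
class Rule:
    offense: str      # the behaviour being outlawed
    proof_level: int  # how much corroborating evidence a claim needs
    severity: int     # morality-rating points deducted on conviction

rules = [
    Rule(offense="comment_spam", proof_level=1, severity=50),
    Rule(offense="denial_of_service", proof_level=3, severity=200),
]

def judge_claim(rule, evidence_strength, offender_rating):
    """Apply a rule: deduct its severity from the offender's MR if proven."""
    if evidence_strength >= rule.proof_level:
        return offender_rating - rule.severity
    return offender_rating

print(judge_claim(rules[0], evidence_strength=2, offender_rating=0))  # -50
```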
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim does not tamper with the evidence. Upon retrieval by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
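One way to make such logs tamper-evident is sketched below. The text specifies encryption; this sketch substitutes an HMAC tag per entry (a standard integrity mechanism) under a key assumed to be held by the judges, since integrity rather than secrecy is the stated goal. The key and entry format are hypothetical.&lt;br /&gt;

```python
# Sketch: each evidence-log entry carries an HMAC tag computed under a
# key held by the judges, so a claimant's modification of any entry is
# detected when the judges re-verify the log.

import hashlib
import hmac

JUDGE_KEY = b"shared-secret-held-by-judges"  # hypothetical key

def sign_entry(entry: str) -> str:
    return hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()

def append_entry(log, entry):
    log.append((entry, sign_entry(entry)))

def verify_log(log):
    """Judges recompute every tag before trusting the evidence."""
    return all(hmac.compare_digest(tag, sign_entry(e)) for e, tag in log)

log = []
append_entry(log, "2011-04-01T12:00 connection flood from host-42")
print(verify_log(log))   # True: untampered
log[0] = ("edited evidence", log[0][1])
print(verify_log(log))   # False: tampering detected
```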
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the master/slave list concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. &lt;br /&gt;
&lt;br /&gt;
Requires changes at multiple levels (OS, hardware, network protocols, infrastructure)&lt;br /&gt;
&lt;br /&gt;
Requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is typically produced by automated scripts that insert the same comment, usually including links to other websites, into the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a service that is normally available is accessed either by numerous IPs through IP spoofing or by a distributed DoS attack using multiple valid IPs, which in turn bogs down the system by constraining its service capacity. By maxing out or flooding a service with requests, a system will either shut down or become very difficult to use; regardless, the outcome is a denial of that service. We have come up with some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be the secure one he or she intended to visit. Unfortunately, the site the user has been directed to is designed to imitate the appearance and behaviour of the intended site, but it is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12]Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9054</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9054"/>
		<updated>2011-04-02T21:27:08Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Resources */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties within the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the &amp;quot;benefits and burdens&amp;quot; of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator&#039;s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase &amp;quot;an eye for an eye&amp;quot;. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime than not to, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge and you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment may well be necessary, it is hard to see a situation where truly retributive punishment benefits a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may further damage the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse those effects, and it may adversely affect the system.&lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is removal from the network. It may be that the criminal computer had previously provided a very efficient connection to some data set, but now there is no way to communicate with it. As a result, members of the remaining network must use a less efficient connection to reach the same data; thus the punishment itself has a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking the specific type of communication originating from the criminal computer, it becomes a teleological retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
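&lt;br /&gt;
As a rough illustration, the logic of matching punishment to offense can be sketched in code. The punishment list, severity scale, and function below are hypothetical, intended only to show how a node might cap its response at the harshest punishment the offense actually warrants.&lt;br /&gt;

```python
# Hypothetical sketch of teleological-retributive punishment selection.
# All punishment names and severity levels are illustrative assumptions,
# not part of this article's design.

# Punishments available, ordered from mildest to harshest, each tagged
# with the minimum offense severity that justifies it.
PUNISHMENTS = [
    (1, "log warning"),
    (2, "throttle bandwidth"),
    (3, "block offending protocol"),
    (4, "remove from network"),
]

def choose_punishment(offense_severity):
    """Pick the harshest punishment that does not exceed the offense."""
    chosen = PUNISHMENTS[0][1]
    for level, action in PUNISHMENTS:
        if offense_severity >= level:
            chosen = action
    return chosen

# A spam attack of severity 3 gets its protocol blocked rather than
# full removal, preserving the node's useful network paths.
print(choose_punishment(3))  # block offending protocol
```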
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exception is any member of society who is without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule means one overall leader of justice who determines what is right and wrong in order to best serve the needs of the system.&lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of their crime. The fine imposed may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused.[1]&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to live under the watch of professionals, and it is up to the discretion of those professionals when the punishment is complete. For example, lawyers, judges, psychologists, and prison guards determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be required to pay a fine as well as serve a prison sentence; however, they serve different purposes. All three punishment types deter future criminals, but each has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison reduces personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad: good encompasses things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms include worldly, cruel, selfish, wealthy, and aggressive.[6]&lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others could certainly work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer&#039;s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus is the action of the crime, and the mens rea is the mental state behind it. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. At the highest level the crime is committed on purpose; at the lowest, through negligence. An example of such a distinction is determining whether a car hit someone intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to avoid the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime involving computers and the Internet. One preventative measure was created in 1984 by the United States Congress: the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many amendments because of ambiguity in how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on the computer networks of the time (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes the commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error occurs in the system: say a bit goes missing and sensitive information is sent to an incorrect address. If this error caused losses to some entity, would the user be blamed, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if a computer were running some genetic programming to create a program it deemed good, and then intentionally used it, that intent to use would differ from its continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from doing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences or shame for causing malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterring possible criminals only works if they fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would bring. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing by idle. However, the human who forces his computer to perform such malicious actions may be deterred by the consequences that might follow from the law, or by the performance drop on his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent the human user from committing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Implementing a morality system in which every node carries a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high that rating is. A rating lowered by malicious actions and raised by being helpful to the system allows computers to &amp;quot;care&amp;quot; about whom they communicate with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated feeling of care and shame might allow a justice system to be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system.&lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is viewed as an incrementally deployable justice system which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to nodes inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes available bandwidth to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
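&lt;br /&gt;
A minimal sketch of how a per-network morality rating might be tracked, assuming hosts can be uniquely identified as required above. The class name, starting rating of zero, and point values are illustrative assumptions rather than part of the Justice Web specification.&lt;br /&gt;

```python
# Sketch of a morality rating ledger local to one Justice Web.
# Starting rating and point values are hypothetical.

STARTING_RATING = 0

class MoralityLedger:
    """Tracks morality ratings local to one Justice Web."""

    def __init__(self):
        self.ratings = {}

    def rating(self, host_id):
        return self.ratings.get(host_id, STARTING_RATING)

    def adjust(self, host_id, points):
        # Positive points for good behaviour (e.g. sharing bandwidth),
        # negative points for offenses.
        self.ratings[host_id] = self.rating(host_id) + points

    def may_connect(self, host_id, threshold):
        # A server rule such as "deny connections below -100" becomes
        # a simple threshold check against the local ledger.
        return self.rating(host_id) >= threshold

web = MoralityLedger()
web.adjust("host-a", -150)              # host-a penalized for spamming
print(web.may_connect("host-a", -100))  # False: rating is below -100
print(web.may_connect("host-b", -100))  # True: unknown hosts start at 0
```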
&lt;br /&gt;
====Master List====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of all hosts in the Justice Web. Since the list may grow large as more hosts are added, list storage becomes important. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts.&lt;br /&gt;
&lt;br /&gt;
The subset mirrored to each slave list depends on the rules implemented by the server. For instance, if a server decides to deny any connections with an MR of -100, all entries with MR -100 would be stored in the slave list.&lt;br /&gt;
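&lt;br /&gt;
The mirroring step could be sketched as a simple filter over the master list. The dictionary layout and threshold-style rule below are assumptions for illustration only.&lt;br /&gt;

```python
# Sketch of slave-list mirroring: a host copies only the master-list
# entries relevant to its own rules, e.g. the entries it needs to deny.
# Data layout and rule form are illustrative assumptions.

master_list = {
    "host-a": -150,
    "host-b": 40,
    "host-c": -100,
}

def mirror_slave_list(master, deny_at_or_below):
    """Copy only entries whose MR falls at or below the deny threshold."""
    return {h: mr for h, mr in master.items() if not mr > deny_at_or_below}

# A server that denies connections at MR -100 mirrors only the entries
# it will actually consult.
slave = mirror_slave_list(master_list, -100)
print(sorted(slave))  # ['host-a', 'host-c']
```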
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the MR of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
By default, the node in the network with the highest MR (usually the owner) is appointed judge, though this is implementation specific. Alternatively, a ring of judges is appointed and given the power to lower the MR of other judges, with evidence validated by a majority of the judges. This would be implemented as an attempt to keep the judges in check.&lt;br /&gt;
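&lt;br /&gt;
The ring-of-judges option could be sketched as a majority vote over the evidence before any MR change is applied. The vote representation and penalty handling below are hypothetical simplifications; in practice each judge would decrypt and inspect the evidence log itself.&lt;br /&gt;

```python
# Sketch of majority validation by a ring of judges. Vote values and
# penalty amounts are illustrative assumptions.

def majority_validates(votes):
    """True when strictly more than half the judges accept the evidence."""
    yes = sum(1 for v in votes if v)
    return yes * 2 > len(votes)

def judge_claim(ledger, offender, votes, penalty):
    # The offender's MR only drops when a majority accepts the evidence.
    if majority_validates(votes):
        ledger[offender] = ledger.get(offender, 0) - penalty
        return True
    return False

ratings = {}
# Three of four judges accept the evidence, so the penalty is applied.
applied = judge_claim(ratings, "host-a", [True, True, True, False], 50)
print(applied, ratings)  # True {'host-a': -50}
```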
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
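&lt;br /&gt;
A rule&#039;s three parts (the offense, the level of proof needed, and the severity of punishment) might be represented as a small table consulted during the judgement of claims. Field names and values here are illustrative assumptions.&lt;br /&gt;

```python
# Sketch of a rule table: each rule pairs an offense with the level of
# proof required and the severity of punishment. All values hypothetical.

from collections import namedtuple

Rule = namedtuple("Rule", ["offense", "proof_required", "penalty"])

RULES = {
    "comment_spam": Rule("comment_spam", proof_required=1, penalty=20),
    "dos_attack": Rule("dos_attack", proof_required=3, penalty=200),
}

def judge(offense, proof_level):
    """Return the penalty if the claim's proof meets the rule's bar."""
    rule = RULES.get(offense)
    if rule is None:
        return 0  # no rule defined: no punishment
    if proof_level >= rule.proof_required:
        return rule.penalty
    return 0

print(judge("dos_attack", 3))  # 200
print(judge("dos_attack", 1))  # 0: insufficient proof
```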
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim cannot tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
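&lt;br /&gt;
As a sketch of the tamper-resistance the logs need, the example below uses an HMAC keyed by a secret shared with the judges. This provides detection of modification rather than the full encryption described above, and the key handling is a deliberate simplification for illustration.&lt;br /&gt;

```python
# Sketch of tamper-evident evidence log entries using an HMAC.
# The pre-shared key is an assumption made for this example.

import hmac
import hashlib

JUDGE_KEY = b"shared-secret-held-by-judges"  # assumption: pre-shared

def seal(entry):
    # Attach an authentication tag the claiming node cannot forge.
    tag = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return (entry, tag)

def verify(entry, tag):
    # Judges recompute the tag; any edit to the entry changes it.
    expected = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

entry, tag = seal("2011-04-12 host-a sent 5000 identical comments")
print(verify(entry, tag))                # True
print(verify(entry + " (edited)", tag))  # False: tampering detected
```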
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the master/slave list concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it becomes infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it makes sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this is implementation specific, but it would especially make sense when multiple Justice Webs join or form some sort of alliance.&lt;br /&gt;
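&lt;br /&gt;
Routing claims by severity through such a hierarchy might look like the sketch below; the tier names and severity boundaries are invented for illustration.&lt;br /&gt;

```python
# Sketch of hierarchical jurisdictions: less severe claims are handled
# by lower tiers of judges so the top ring is not a bottleneck.
# Tier names and boundaries are illustrative assumptions.

TIERS = [
    ("local judges", 50),      # severity up to 50
    ("regional judges", 200),  # severity up to 200
    ("high ring", None),       # everything else
]

def route_claim(severity):
    """Send a claim to the lowest tier allowed to hear it."""
    for tier, limit in TIERS:
        if limit is None or limit >= severity:
            return tier
    return TIERS[-1][0]

print(route_claim(10))   # local judges
print(route_claim(120))  # regional judges
print(route_claim(999))  # high ring
```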
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. A global implementation would:&lt;br /&gt;
&lt;br /&gt;
* Require changes at multiple levels (OS, hardware, network protocols, infrastructure)&lt;br /&gt;
* Require a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we investigate is comment spam. The spam we focus on is usually produced by automated scripts which insert the same comment, typically including links to other destination websites, into forums on public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a service that is normally available is flooded with requests, either from numerous spoofed IPs or from a distributed DoS attack using multiple valid IPs, bogging down the system as the service&#039;s capacity is exhausted. By maxing out or flooding a service with requests, the system is either shut down or made very difficult to use; either way, the outcome is denial of that service. We propose some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site he/she is visiting to be secure. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not actually the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view it. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, using it this way is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne(1915),Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed 1910-, &#039;&#039;Ancient History Sourcebook:Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12]Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993. [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;br /&gt;
&lt;br /&gt;
[15] C.A. Thekkath, T. Mann, and E.K. Lee, &#039;&#039;Frangipani: A scalable distributed file system&#039;&#039;, in Proceedings of ACM SIGOPS Operating Systems Review 1997.&lt;br /&gt;
&lt;br /&gt;
[16] S. Ghemawat, H. Gobioff, and S.T. Leung, &#039;&#039;The Google File System&#039;&#039;, in Proceedings of the ACM SIGOPS Operating Systems Review, 2003.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9053</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9053"/>
		<updated>2011-04-02T21:00:05Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Master List */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice in a distributed computing environment. Although directly applying human concepts related to justice, such as intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions, such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “an eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime then someone will exact revenge, and thus you will pay a price. Retribution requires the criminal to pay a price for the crime committed, and thus internalize how the crime harms society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers in a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effects of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are any members of society that are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; the balance of punishment and reward forms the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule places one overall leader of justice above the law, who determines what is right and wrong in order to best serve the needs of the system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of their crime. The penalty fine imposed may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals to decide when the punishment is complete. For example, it is up to lawyers, judges, psychologists and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be required to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say, DoS or spam attacks) then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not) then you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels rank from the highest, committing a crime on purpose, down to the lowest, taking part in a crime negligently. An example of such a distinction is deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then this system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act are punished with the same severity, to prevent the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime involving computers and the Internet. One example of a preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), enacted by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because it did not specify clearly how different crimes were categorized based on the mens rea. The difference between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the distinction between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of then-current security measures on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill the worm, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true, except in the case where an error has occurred in the system, or a bit has gone missing, and sensitive information intended for one address has been sent to an incorrect address. If this error created large losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from doing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences or shame in response to malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equal with respect to what a computer prefers to do. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. However, the human who forces a computer to perform such malicious actions may be deterred by the consequences that might follow from the law, or by the possible performance drop on his own computer. The penalties currently in place only affect a human; whether it be a jail sentence or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. Had the computer itself been the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on the network by the same human user using another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Implementing a morality system in which every node on the system carries a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high that rating is. With the rating lowered by malicious actions and raised by being helpful to the system, computers could &amp;quot;care&amp;quot; about who they are communicating with, and also feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project, but it is still important to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of the system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is viewed as an incrementally deployable justice system which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule which specifies that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
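The rating mechanics above might be sketched as follows. This is a minimal illustration, not part of the Justice Web specification: the names (&#039;&#039;MoralityLedger&#039;&#039;), the starting rating of 0, and the default threshold of -100 are our own assumptions.&lt;br /&gt;

```python
# Sketch of a per-network morality ledger. Names (MoralityLedger,
# STARTING_RATING) and the default threshold are illustrative assumptions.

STARTING_RATING = 0  # each host begins with a predefined rating


class MoralityLedger:
    """Morality ratings local to a single Justice Web."""

    def __init__(self):
        self.ratings = {}

    def adjust(self, host_id, points):
        # Hosts earn or lose points according to their behaviour,
        # e.g. +points for sharing bandwidth, -points for spamming.
        self.ratings[host_id] = self.rating(host_id) + points

    def rating(self, host_id):
        return self.ratings.get(host_id, STARTING_RATING)

    def may_connect(self, host_id, threshold=-100):
        # A service denies connections from hosts rated below its threshold.
        return self.rating(host_id) >= threshold
```

Because each ledger is local to one network, a host penalized in one Justice Web starts from the default rating in another, mirroring the jurisdiction-like behaviour described above.&lt;br /&gt;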
&lt;br /&gt;
====Master List====&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;master list&#039;&#039; is a database that stores the morality ratings of all hosts in the Justice Web. Considering the list may grow to a large size (as more hosts are added), list storage becomes important. To avoid burdening a central server (local to the Justice Web), we envision a subset of the master list (which we call a &#039;&#039;slave list&#039;&#039;) being copied to other hosts. Each slave list mirrors a subset of the entries in the master list.&lt;br /&gt;
&lt;br /&gt;
The subset is dependent on the rules implemented by the server. For instance, if a server decides to deny any connections with an MR of -100, all entries that have MR: -100 would be stored in the slave list.&lt;br /&gt;
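The master/slave split can be pictured as a simple filtered copy. The predicate below mirrors the MR: -100 example rule; the host-to-MR dictionary layout is an assumption made for illustration.&lt;br /&gt;

```python
# Sketch: deriving a slave list from the master list. The data layout
# (a host -> MR mapping) is an assumption for illustration.

master_list = {
    "hostA": 42,
    "hostB": -100,
    "hostC": -100,
    "hostD": 7,
}


def build_slave_list(master, predicate):
    """Mirror only the master-list entries a server's rules care about."""
    return {host: mr for host, mr in master.items() if predicate(mr)}


# A server that denies connections with MR: -100 mirrors only those entries.
slave_list = build_slave_list(master_list, lambda mr: mr == -100)
```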
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the MR of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
By default, the judge is the node in the network with the highest MR (usually the owner), though this choice is implementation specific. Alternatively, a ring of judges is appointed and given the power to lower the MR of other judges, but the evidence must be validated by a majority of the judges. This would be implemented as an attempt to keep the judges in check.&lt;br /&gt;
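Majority validation by a ring of judges might look like the following sketch, assuming (our assumption, not the article&#039;s) that each judge&#039;s evidence check reduces to a boolean verdict.&lt;br /&gt;

```python
# Sketch: a ring of judges validates a claim by strict majority before
# any punishment is applied. Function names are illustrative assumptions.

def majority_validates(verdicts):
    """True only if a strict majority of judges accept the evidence."""
    return sum(verdicts) > len(verdicts) / 2


def judge_claim(ratings, offender, severity, verdicts):
    # Lower the offender's MR only when the ring reaches a majority;
    # otherwise the claim is dismissed and ratings are left untouched.
    if majority_validates(verdicts):
        ratings[offender] = ratings.get(offender, 0) - severity
    return ratings
```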
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
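A rule&#039;s three parts could be captured as a small record; the field names and the example offenses and values below are hypothetical, chosen only to illustrate the structure.&lt;br /&gt;

```python
# Sketch: a rule's three parts (offense, required proof, punishment
# severity) as a record. Field names and values are assumptions.
from dataclasses import dataclass


@dataclass
class Rule:
    offense: str      # the behaviour the rule forbids
    proof_level: int  # e.g. number of corroborating evidence logs required
    severity: int     # MR points deducted on a successful claim


# An example rule set that a Justice Web's judges might define.
rules = [
    Rule("comment spam", proof_level=1, severity=10),
    Rule("denial of service", proof_level=3, severity=100),
]
```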
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim has not tampered with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
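The text calls for encrypted logs; as a simpler stand-in for the tamper-detection half of that goal, the sketch below seals each log with an HMAC under a key assumed to be shared with the judges. The key, the serialization, and all names are our assumptions.&lt;br /&gt;

```python
# Sketch: tamper-evident evidence logs. An HMAC stands in for the
# integrity property of the encryption described in the text.
# JUDGE_KEY and the function names are assumptions for illustration.
import hashlib
import hmac

JUDGE_KEY = b"key-shared-with-the-judges"


def seal_log(entries):
    """Serialize log entries and append a MAC the judges can verify."""
    body = "\n".join(entries).encode()
    tag = hmac.new(JUDGE_KEY, body, hashlib.sha256).hexdigest()
    return body, tag


def verify_log(body, tag):
    # A claim's evidence is only considered if the MAC checks out.
    expected = hmac.new(JUDGE_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```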
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses (though this might conflict with the master/slave list concept).&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by the lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining together or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local Justice Web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. &lt;br /&gt;
&lt;br /&gt;
Such a system requires changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
&lt;br /&gt;
It effectively requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we investigate is comment spam. The type of spam we focus on is produced by automated scripts which insert the same comment, usually including links to other destination websites, on the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a service that is normally available is accessed either by numerous IPs through IP spoofing or by a distributed DoS attack using multiple valid IPs, which bogs down the system because service capacity is constrained. By maxing out or flooding a service with multiple requests, a system is either shut down or made very difficult to use; either way, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be secure. Unfortunately, the site the user has been directed to has been designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not actually the real webpage. As a result, the user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7] Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8] Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915) and Claude Hermann Walter Johns (The Encyclopaedia Britannica, 11th ed., 1910), &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, ed. Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9052</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9052"/>
		<updated>2011-04-02T20:48:04Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Morality Rating */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice in a distributed computing environment. Although directly applying human concepts related to justice, such as intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions, such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls provides a definition of the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment benefits the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
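&lt;br /&gt;
As a rough sketch of this teleologic throttling punishment (the allowance and throttled-rate numbers here are invented purely for illustration):&lt;br /&gt;

```python
# Sketch: teleologic punishment as bandwidth throttling. A host that
# exceeded its allowance keeps its connection, but at a reduced rate,
# freeing resources for the rest of the system. Numbers are illustrative.

BANDWIDTH_ALLOWANCE = 100  # units a host may use per interval
THROTTLED_RATE = 10        # rate imposed on a bandwidth hog

def allocate_rate(requested, used_last_interval):
    """Grant a rate for the next interval, throttling proven hogs."""
    if used_last_interval > BANDWIDTH_ALLOWANCE:
        # The punishment is "good" in the teleologic sense: the deviant
        # host is corrected and its excess bandwidth is reclaimed.
        return min(requested, THROTTLED_RATE)
    return min(requested, BANDWIDTH_ALLOWANCE)
```
&lt;br /&gt;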
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to discuss retributive punishment in comparison to retaliation. Although both incorporate the concept of punishment as a just, and necessary, act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus he should internalize how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment would match the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exception to this is any member of society who is without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be required to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad; for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil; for example, good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer&#039;s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, acting upon a crime on purpose, to the lowest, being part of a crime negligently. An example of such a distinction is deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
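&lt;br /&gt;
The four MPC culpability levels have a natural ordering which a computerized justice system could encode directly. A minimal sketch (the numeric penalty scaling is our own illustrative assumption, not part of the MPC):&lt;br /&gt;

```python
from enum import IntEnum

class MensRea(IntEnum):
    """The four MPC culpability levels, ordered lowest to highest."""
    NEGLIGENTLY = 1
    RECKLESSLY = 2
    KNOWINGLY = 3
    PURPOSELY = 4

def scaled_penalty(base_penalty, level):
    # Illustrative assumption: punishment grows with culpability.
    return base_penalty * int(level)
```
&lt;br /&gt;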
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reasoning, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to avoid the need for human intervention and investigation.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime involving computers and the Internet. One example of a preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because of ambiguity in how different crimes were categorized based on mens rea. The difference between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the distinction between accessing a system and damaging a system had to be made more specific over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacy of current security measures on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except in cases where an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are built, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if a computer ran a genetic programming system to create a program it deemed good, and then intentionally used that program, the intent to use it would differ from the computer continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to prevent a computer from performing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences, or shame, in response to malicious actions. This approach to justice is difficult because computers do not have feelings; any kind of work, from word processing to a denial of service attack, is equal with respect to what a computer prefers to do. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing idle. However, a human who forces his computer to perform malicious actions may be deterred by the consequences that might follow from the law, or by the performance drop on his own computer. The penalties currently in place only affect a human: whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent further malicious actions from occurring on the network by the same human user operating another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Implementing a morality system in which every node on the system has a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high the rating is. With ratings lowered by malicious actions and raised by being helpful to the system, computers can &amp;quot;care&amp;quot; about whom they communicate with, and can also feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is intended to be an incrementally deployable justice system which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to the nodes inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service, based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule which specifies that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
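&lt;br /&gt;
A minimal sketch of a rating ledger and such a threshold rule; the names, the default starting rating of zero, and the -100 threshold are illustrative assumptions:&lt;br /&gt;

```python
class MoralityLedger:
    """Tracks per-host morality ratings within one Justice Web."""
    def __init__(self, starting_rating=0):
        self.starting = starting_rating
        self.ratings = {}

    def rating(self, host):
        # Unknown hosts begin at the predefined starting rating.
        return self.ratings.get(host, self.starting)

    def adjust(self, host, points):
        # Judges call this to reward good behaviour (+) or punish (-).
        self.ratings[host] = self.rating(host) + points

def may_connect(ledger, host, minimum=-100):
    """A server-side rule: refuse hosts rated below the minimum."""
    return ledger.rating(host) >= minimum
```

For example, a host punished down to -150 would be refused until it rebuilds its rating above the server&#039;s threshold.&lt;br /&gt;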
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality in one network might have a good morality in another. Indeed, each Justice Web may have a different set of rules which may overlap or be entirely different to another Justice Web. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Master List&amp;quot; is a database that stores the morality ratings of a Justice Web, and is maintained by the judges. Servers within the network store a &amp;quot;Slave List&amp;quot; which mirrors a subset of the entries in the master list. The subset depends on the rules implemented by the server. For instance, if a server decides to deny any connections with an MR of -100, all entries with MR: -100 would be stored in the slave list.&lt;br /&gt;
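&lt;br /&gt;
One way to sketch the slave-list mirroring, assuming (our assumption) that a server rule can be expressed as a predicate over morality ratings:&lt;br /&gt;

```python
def build_slave_list(master_list, rule):
    """Mirror only the master-list entries this server's rule acts on."""
    return {host: mr for host, mr in master_list.items() if rule(mr)}

# A server denying hosts at or below a -100 threshold (our reading of
# the example rule) only needs to mirror those entries:
def deny_rule(mr):
    return -100 >= mr  # true when mr is at or below -100
```
&lt;br /&gt;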
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the MR of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
By default, the judge is the node in the network with the highest MR (usually the owner), though this choice is implementation specific. Alternatively, a ring of judges is appointed, and each judge is given the power to lower the MR of other judges, but the evidence must be validated by a majority of the judges. This arrangement is intended to keep the judges in check.&lt;br /&gt;
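&lt;br /&gt;
The majority-validation step for a ring of judges can be sketched as follows; the vote representation and the penalty handling are our assumptions.&lt;br /&gt;

```python
# Hypothetical sketch of ring-of-judges validation: a ruling is applied
# only when a strict majority of the judges accepts the evidence.
def majority_upholds(votes):
    # votes is a list of booleans, one per judge in the ring.
    yes = sum(1 for v in votes if v)
    return 2 * yes > len(votes)

def apply_ruling(ledger, offender, penalty, votes):
    # Lower the offender's MR only when the ring reaches a majority.
    if majority_upholds(votes):
        ledger[offender] = ledger.get(offender, 0) - penalty
        return True
    return False
```

Note that a tied vote fails, so an even-sized ring cannot convict without a true majority.&lt;br /&gt;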
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, the level of proof needed, and the severity of the punishment. The creation of these rules is by default left to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
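&lt;br /&gt;
The three-part structure of a rule can be written down directly. The field names and the numeric encodings of proof level and severity are illustrative assumptions.&lt;br /&gt;

```python
# Hypothetical encoding of a rule as its three parts.
from collections import namedtuple

Rule = namedtuple("Rule", ["offense", "proof_level", "severity"])

# Example rule set; proof_level is the minimum evidence score a claim
# must reach, severity is the MR penalty applied on a successful claim.
RULES = [
    Rule(offense="comment_spam", proof_level=1, severity=10),
    Rule(offense="denial_of_service", proof_level=3, severity=100),
]

def judge_claim(rule, evidence_score):
    # A claim succeeds only when the evidence meets the rule's proof level.
    if evidence_score >= rule.proof_level:
        return rule.severity
    return 0
```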
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim did not tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
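&lt;br /&gt;
The article relies on encryption to keep claimants from tampering with their own logs; the tamper-evidence half of that idea can be sketched with a message authentication code keyed for the judges. The key handling below is a loud simplification, not a real key-distribution scheme.&lt;br /&gt;

```python
# Simplified sketch: each log entry carries an HMAC tag computed with a
# key known only to the judges (assumed provisioned out of band), so a
# claimant cannot alter an entry without invalidating its tag.
import hashlib
import hmac

JUDGE_KEY = b"shared-with-judges-only"  # placeholder key for illustration

def seal_entry(entry):
    tag = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return entry, tag

def verify_entry(entry, tag):
    expected = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking tag bytes through timing.
    return hmac.compare_digest(expected, tag)
```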
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the Slave-Master List concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by the lower rungs. Again, this is implementation specific, but it would especially make sense in the case of multiple Justice Webs joining together or forming some sort of alliance.&lt;br /&gt;
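&lt;br /&gt;
Severity-based routing through such a hierarchy can be sketched as follows; the tier names and cutoffs are invented for illustration.&lt;br /&gt;

```python
# Hypothetical jurisdiction tiers: claims are routed to the lowest rung
# able to handle their severity, with a top-level council as fallback.
TIERS = [
    (10, "local judges"),      # minor claims
    (100, "regional judges"),  # serious claims
]

def route_claim(severity):
    # Send the claim to the first tier whose cutoff covers its severity.
    for cutoff, tier in TIERS:
        if not severity > cutoff:
            return tier
    return "global council"
```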
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. &lt;br /&gt;
&lt;br /&gt;
A global justice web would require changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
&lt;br /&gt;
In effect, it would require a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is produced by automated scripts that insert the same comment, usually including links to other websites, on the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a service that is normally available is flooded with requests, either from numerous spoofed IP addresses or, in a distributed DoS attack, from multiple hosts with valid IP addresses. By maxing out the service&#039;s capacity, the attack leaves the system either shut down or very difficult to use; either way, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be the secure one he/she intended to visit. Unfortunately, the site the user has been directed to is designed to imitate the appearance and behaviour of the intended site, but it is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the usage of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9051</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9051"/>
		<updated>2011-04-02T20:47:28Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Morality Rating */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as serving two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to discuss retributive punishment in comparison to retaliation. Although they both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed and thus internalize how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are any members of society that are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories; corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed, but it should cause the criminal the same amount of distress as the crime caused. [1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of the professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad; for example, good encompasses things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil; for example, good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks) then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer&#039;s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, taking part in a crime negligently. An example of such a distinction would be determining whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to prevent the need for human intervention and investigation.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime on the Internet and in computers. An example of preventative measures for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created in 1984 by the United States Congress. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, many changes have had to be made to it because of unspecific instances of how different crimes were categorized based on the mens rea. The difference between “knowingly” and “intentionally” doing an act changed the degree of punishment, and the distinction between accessing a system and damaging a system had to be further specified over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures then in place on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to try to prove that it was his intent to access unauthorized computers, which it did, and it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that the computer executes commands that are input by the user, so everything that the computer does must be on purpose, because it is just following instructions. This may be true except in a case where an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created losses for some entity, would the user be blamed for it, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(Perhaps if the computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer has been decided, the next thing to consider is how one would prevent a computer from performing malicious actions. Attempting to follow in the footsteps of general deterrence theory, we would try to instill some fear of the consequences, or shame, that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equal in the eyes of the machine. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit over penalty that they would procure from a malicious act. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. However, the human that forces a computer to perform such malicious actions may be deterred by the consequences that might follow from the law, or by the possible performance drop of his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed while the computer element remains. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on the network by the same human user on another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would eventually want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; instead, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating would let nodes decide whether to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for helpful behaviour would allow computers to &amp;quot;care&amp;quot; about who they communicate with, and to feel &amp;quot;shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might amount to expulsion). These simulated feelings of care and shame could form the basis of a justice system for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. Nevertheless, it is still worth describing the features a fully functional system would require and outlining its potential benefits and shortcomings. In fact, we were unable to come up with a single feasible system; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, both take a teleologic-retributive approach to justice: punishments are viewed as necessary, but they are withheld if imposing them would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes the above discussion of justice and applies it to the management of a computer network. It is an incrementally deployable justice system which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside it. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
&lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
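The access check above can be sketched minimally as follows; this is an illustration only, and the function name, rating values, and the -100 default are assumptions rather than a defined interface. (The operator module stands in for the usual comparison operators.)&lt;br /&gt;

```python
import operator

# Hypothetical sketch of a Justice Web access check. The -100 threshold
# comes from the example above; names and defaults are assumptions.
def may_connect(morality_rating, threshold=-100):
    # A host may connect only if its rating is at or above the threshold.
    # operator.ge(a, b) means "a is greater than or equal to b".
    return operator.ge(morality_rating, threshold)

print(may_connect(-250))   # deviant host, refused
print(may_connect(50))     # well-behaved host, admitted
```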
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality rating in one network might have a good rating in another. Indeed, each Justice Web may have a set of rules that overlaps with or is entirely different from another Justice Web&#039;s. This is similar to real-world justice, where an action may be considered criminal in one country but not in another. &lt;br /&gt;
&lt;br /&gt;
Morality ratings are assigned by Judge(s) or leaders of the network. How judges are elected is beyond our scope, so we defer this task to a different team.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Master List&amp;quot; is a database that stores the morality ratings of a Justice Web, and is maintained by the judges. Servers within the network store a &amp;quot;Slave List&amp;quot; which mirrors a subset of the entries in the master list. The subset depends on the rules implemented by the server. For instance, if a server decides to deny any connection with an MR of -100, all entries that have an MR of -100 would be stored in the slave list.&lt;br /&gt;
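The master/slave list relationship can be sketched as a simple filter over the master list; the host names, ratings, and the -100 cutoff below are illustrative assumptions, not part of any defined protocol.&lt;br /&gt;

```python
import operator

# Hypothetical master list: host identifier mapped to morality rating (MR).
master_list = {
    "host-a": 40,    # helpful node
    "host-b": -100,  # banned under this server's rule
    "host-c": -350,  # also banned
}

def build_slave_list(master, cutoff=-100):
    # Mirror only the entries this server's rule cares about: hosts whose
    # MR is at or below the cutoff. operator.le(a, b) means "a is at most b".
    return {host: mr for host, mr in master.items() if operator.le(mr, cutoff)}

print(build_slave_list(master_list))  # only host-b and host-c remain
```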
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and the ID of the offender. If the evidence is sufficient, the judge changes the MR of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
By default, the judge is the node in the network with the highest MR (usually the owner), though this choice is implementation specific. Alternatively, a ring of judges is appointed and given the power to lower the MR of other judges, with evidence validated by a majority of the judges. This keeps the judges in check.&lt;br /&gt;
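The claim-handling flow with a ring of judges can be sketched as follows; the strict-majority rule, function names, and penalty values are assumptions made for illustration.&lt;br /&gt;

```python
import operator

# Hypothetical sketch of the judge ring described above: an MR change is
# applied only if a majority of judges validate the submitted evidence.
def majority_validates(votes):
    # votes is a list of booleans, one per judge in the ring.
    approvals = sum(1 for v in votes if v)
    # Strict majority: approvals * 2 must exceed the number of judges.
    return operator.gt(approvals * 2, len(votes))

def adjudicate(ratings, offender, penalty, votes):
    # Lower the offender's MR only when the claim is validated.
    if majority_validates(votes):
        ratings[offender] = ratings.get(offender, 0) - penalty
    return ratings

ratings = adjudicate({"host-b": 10}, "host-b", 50, [True, True, False])
print(ratings)  # 2 of 3 judges approved, so host-b drops to -40
```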
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
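The three-part rule described above can be modelled as a small record; the concrete offenses, proof levels, and penalty values below are invented for illustration, not a defined rule set.&lt;br /&gt;

```python
from collections import namedtuple

# Hypothetical shape of a Justice Web rule: the offense, the level of proof
# required, and the severity of the punishment (here, MR points deducted).
Rule = namedtuple("Rule", ["offense", "proof_required", "penalty"])

rules = [
    Rule(offense="comment_spam", proof_required="single_log", penalty=25),
    Rule(offense="denial_of_service", proof_required="multiple_logs", penalty=100),
    Rule(offense="phishing", proof_required="multiple_logs", penalty=150),
]

def lookup(offense):
    # Judges consult the rule set when judging a claim.
    for rule in rules:
        if rule.offense == offense:
            return rule
    return None

print(lookup("phishing").penalty)
```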
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they track the most recent network activity of that computer. The logs are encrypted to ensure that the computer making a claim has not tampered with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
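One way to make such logs tamper-evident is a keyed MAC rather than encryption proper; the sketch below assumes a key shared with the judges, which is a deliberate simplification of whatever key management a real deployment would need.&lt;br /&gt;

```python
import hmac
import hashlib
import json

# Assumption: in a real system this key would be held by (or shared with)
# the judges, not hard-coded on the node.
JUDGE_KEY = b"shared-secret-held-by-judges"

def seal(entry):
    # Serialize the log entry deterministically and attach a MAC tag.
    payload = json.dumps(entry, sort_keys=True).encode()
    tag = hmac.new(JUDGE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify(sealed):
    # Recompute the tag; any edit to the payload changes it.
    expected = hmac.new(JUDGE_KEY, sealed["payload"].encode(),
                        hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels during verification.
    return hmac.compare_digest(expected, sealed["tag"])

record = seal({"event": "connection", "peer": "host-c"})
print(verify(record))   # untampered record checks out
record["payload"] = record["payload"].replace("host-c", "host-x")
print(verify(record))   # modified evidence is detected
```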
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the Slave-Master List concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
Given these issues, we do not believe an incrementally deployable solution is possible. Such a solution:&lt;br /&gt;
&lt;br /&gt;
* Requires changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
* Requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we investigate is comment spam. The spam we focus on is usually produced by automated scripts that insert the same comment, typically including links to other websites, on forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is the act of overwhelming a normally available service, either from numerous IPs via IP spoofing or through a distributed DoS attack using multiple valid IPs, bogging down the system by exhausting the service&#039;s capacity. By flooding a service with requests, attackers render the system either completely unavailable or very difficult to use; either way, the outcome is a denial of that service. We have come up with some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be the secure one he or she intended to visit. Unfortunately, the site has been designed to imitate the appearance and behaviour of the intended site, but is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view it. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, its use is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9050</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9050"/>
		<updated>2011-04-02T20:46:21Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Morality Rating */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society.[1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act is handled such that the punishment benefits the system. A simple example can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even if society gets no benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to discuss retributive punishment in comparison to retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may further harm the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from being removed from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleological retributive punishment. This new punishment would match the severity of the crime but also still allows the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed within the system such that some computers can hand out punishments upon other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system into a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance to the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exception is any member of society without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals when the punishment is complete. For example, lawyers, judges, psychologists, and prison guards determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive as commonly criminals may be asked to pay a penalty fine as well as serve a prison sentence; however they all serve different purposes. All three punishment types serve as a deterrent to future criminals but each method has a different active agent; corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is based on good vs. bad: good includes things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good includes terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you do not care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. A person’s mental state is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These range from the highest, committing a crime on purpose, to the lowest, contributing to a crime negligently. An example is distinguishing whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, removing the need for human intervention and investigation.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime involving computers and the Internet. One preventative measure for these crimes, the Computer Fraud and Abuse Act (&amp;quot;CFAA&amp;quot;), was enacted by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many amendments because of ambiguity in how different crimes were categorized under the mens rea: the distinction between &amp;quot;knowingly&amp;quot; and &amp;quot;intentionally&amp;quot; committing an act changed the degree of punishment, and the difference between accessing a system and damaging a system had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on the computer networks of the day by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes the commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true, except in a case where an error has occurred in the system: say a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error caused heavy losses to some entity, would the user be blamed, or would the computer be blamed for negligently sending to the wrong address? The situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is intent.[12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if a computer ran a genetic programming system to create a program it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to prevent a computer from taking malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences, or shame, in whatever causes malicious actions. This approach is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable to a computer. Deterrence only works on would-be criminals who fear the consequences and cannot accept the ratio of penalty to profit that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was executing those functions or standing idle. The human who forces a computer to take malicious actions, however, may be deterred by the consequences that follow from the law, or by the performance drop on his own machine. The penalties currently in place only affect a human, whether by a jail sentence or confiscation of the physical computer; the human aspect of the problem is removed, not the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent the human user from causing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would aim to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; instead, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating would allow nodes to decide whom to communicate with based on how low or high a rating is. With the rating lowered by malicious actions and raised by being helpful to the system, computers could &amp;quot;care&amp;quot; about whom they communicate with, and feel &amp;quot;shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might amount to expulsion). Based on these simulated feelings of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project, but it is still important to describe the features that a fully functional system would require and to outline its potential benefits and shortcomings. In fact, we were unable to come up with one single system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed immediately if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. We view the implementation as an incrementally deployable justice system, which we refer to as the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to the nodes inside the network, as well as to connections coming from outside it. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
A &#039;&#039;morality rating&#039;&#039; is a numeric value that represents a host&#039;s historical adherence (or lack thereof) to a set of predefined rules. Each host begins with a predefined morality rating, and then earns or loses &amp;quot;points&amp;quot; according to its behaviour. For example, a host might have its morality rating increased if it makes bandwidth available to other hosts on the network. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract. &lt;br /&gt;
 &lt;br /&gt;
The morality rating determines a user&#039;s ability to access a service based on that service provider&#039;s rules. For example, a server within the Justice Web could have a rule specifying that incoming connections from hosts with a morality rating below -100 are not allowed. Conversely, users within a Justice Web use their own morality rating to gain access to resources publicly available within the network.&lt;br /&gt;
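As a rough illustration, the rating-threshold rule above could be sketched as follows. This is a minimal sketch: the -100 threshold is the example from the text, while the class and function names are invented for illustration.

```python
# Minimal sketch of morality-rating-based access control. Only the
# "refuse hosts below MR -100" rule comes from the article; all names
# here are hypothetical.

DEFAULT_RATING = 0

class MoralityLedger:
    def __init__(self):
        self.ratings = {}  # host id mapped to morality rating (MR)

    def rating(self, host_id):
        return self.ratings.get(host_id, DEFAULT_RATING)

    def adjust(self, host_id, points):
        """Judges add points for good behaviour, subtract for offenses."""
        self.ratings[host_id] = self.rating(host_id) + points

def can_connect(ledger, host_id, threshold=-100):
    """Server-side rule: refuse hosts whose MR is below the threshold."""
    return ledger.rating(host_id) >= threshold

ledger = MoralityLedger()
ledger.adjust("host-a", -150)          # punished for an offense
print(can_connect(ledger, "host-a"))   # below -100, refused
print(can_connect(ledger, "host-b"))   # unknown host keeps the default MR
```

A real deployment would also have to authenticate `host_id` (the unique-identification assumption from the Membership section) before consulting the ledger.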
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web. Therefore, a computer that has a low morality rating in one network might have a good one in another. Indeed, each Justice Web may have a set of rules which overlaps with or is entirely different from another&#039;s. This is similar to real-world justice, where an action may be considered criminal in one country, but not in another. &lt;br /&gt;
&lt;br /&gt;
Morality ratings are assigned by Judge(s) or leaders of the network. How judges are elected is beyond our scope, so we defer this task to a different team.&lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Master List&amp;quot; is a database that stores the morality ratings of a Justice Web, and is maintained by the judges. Servers within the network store a &amp;quot;Slave List&amp;quot; which mirrors a subset of the entries in the master list. The subset depends on the rules implemented by the server. For instance, if a server decides to deny any connection from hosts with an MR below -100, all entries with an MR below -100 would be stored in its slave list.&lt;br /&gt;
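The master/slave relationship described above can be sketched as a simple filter. The -100 rule is the example from the text; the data layout and names are assumptions.

```python
# Hypothetical sketch of a server deriving its "Slave List" from the
# Master List: it mirrors only the entries its own rule matches, here
# every host whose MR falls below the deny threshold.

master_list = {"host-a": -150, "host-b": 20, "host-c": -300}

def build_slave_list(master, deny_threshold=-100):
    # mr is below the threshold exactly when deny_threshold exceeds mr
    return {host: mr for host, mr in master.items() if deny_threshold > mr}

slave_list = build_slave_list(master_list)
print(sorted(slave_list))  # the hosts this server will refuse
```

In practice the slave list would be refreshed whenever the judges publish an updated master list, so stale entries are the main consistency concern.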
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and ID of the offender. If the evidence is sufficient, the judge changes the MR of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
By default, the node in the network with the highest MR (usually the owner) is appointed judge, though this choice is implementation specific. Alternatively, a ring of judges is appointed and given the power to lower the MR of other judges, with evidence validated by a majority of the judges. This is an attempt to keep the judges in check.&lt;br /&gt;
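The majority-validation idea could look roughly like this. Only the majority rule comes from the text; the function names and the severity value are assumptions.

```python
# Sketch of the judge-ring check: a claim only lowers the offender's MR
# if a strict majority of judges accept the evidence. All names and
# numeric values are hypothetical.

def judge_claim(validations, severity, ratings, offender):
    """validations: one boolean per judge (True = evidence accepted)."""
    if sum(validations) * 2 > len(validations):   # strict majority
        ratings[offender] = ratings.get(offender, 0) - severity
        return True
    return False

ratings = {}
accepted = judge_claim([True, True, False], severity=50,
                       ratings=ratings, offender="host-x")
print(accepted, ratings)
```

A tie is treated as rejection here, which errs on the side of not punishing; a real system would have to pick and document that convention.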
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
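A rule's three parts could be represented as a small record. The concrete offenses and numeric values below are illustrative only, not specified by the article.

```python
# Sketch: a rule as the three parts named above (offense, level of proof
# required, punishment severity).

from collections import namedtuple

Rule = namedtuple("Rule", ["offense", "proof_level", "severity"])

rules = [
    Rule("comment_spam", proof_level=1, severity=10),
    Rule("denial_of_service", proof_level=2, severity=100),
    Rule("phishing", proof_level=3, severity=200),
]

def lookup(rules, offense):
    """Find the rule a submitted claim should be judged under."""
    return next((r for r in rules if r.offense == offense), None)

print(lookup(rules, "denial_of_service"))
```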
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making a claim cannot tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
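The article calls for encrypted logs; the tamper-evidence property they are meant to provide can be sketched with a keyed MAC over each entry. This sketch assumes each node holds a key shared only with the judges; how that key is distributed is not specified by the article and is left out here.

```python
# Tamper-evident evidence log entries via a keyed MAC (HMAC-SHA256).
# Assumption: the key is shared only with the judges, so the claimant
# cannot forge or alter sealed entries without detection.

import hashlib
import hmac

JUDGE_KEY = b"shared-with-judges-only"   # hypothetical shared secret

def seal(entry, key=JUDGE_KEY):
    """Store a log entry together with a MAC the claimant cannot forge."""
    tag = hmac.new(key, entry, hashlib.sha256).hexdigest()
    return entry, tag

def verify(sealed, key=JUDGE_KEY):
    """Judges recompute the MAC; a mismatch means the evidence was altered."""
    entry, tag = sealed
    expected = hmac.new(key, entry, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

record = seal(b"host-x opened 10000 connections in 2s")
print(verify(record))                        # genuine evidence verifies
print(verify((b"forged entry", record[1])))  # altered evidence fails
```

A MAC gives integrity but not confidentiality; the encryption the article mentions would be layered on top if the log contents themselves must stay private from other nodes.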
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the Slave-Master List concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for one ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by the lower rungs. Again, this is implementation specific, but it would especially make sense when multiple Justice Webs join or form some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. A global implementation would require:&lt;br /&gt;
&lt;br /&gt;
* Changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
* A complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is usually produced by automated scripts that insert the same comment, usually including links to other destination websites, on the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is the act in which a service that is normally available is accessed either by numerous IPs through IP spoofing or by a distributed DoS attack using multiple valid IPs, which in turn bogs down the system as the service&#039;s capacity is exhausted. By maxing out or flooding a service with multiple requests, a system is either shut down or made very difficult to use; either way, the outcome is a denial of that service. We propose some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site he/she is visiting to be secure. Unfortunately, the site they have been directed to is designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9049</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9049"/>
		<updated>2011-04-02T20:09:42Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles that can be assigned, and there are finite resources that can be used to manage the &amp;quot;benefits and burdens&amp;quot; of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic&#039;&#039;&#039;, &#039;&#039;&#039;Retributive&#039;&#039;&#039;, and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator&#039;s network connection may be throttled. This punishment corrects the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
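The throttling punishment above can be sketched with a standard token bucket. This is a generic rate-limiting technique; the article only says the connection &quot;may be throttled&quot;, so the mechanism and all numbers below are assumptions.

```python
# Token-bucket sketch of throttling a punished host's bandwidth.
# The punished rate and capacity are illustrative values only.

import time

class TokenBucket:
    """Cap a host's sustained byte rate as a teleologic punishment."""

    def __init__(self, rate_bytes_per_s, capacity):
        self.rate = rate_bytes_per_s      # refill rate imposed as punishment
        self.capacity = capacity          # burst allowance
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes):
        # Refill tokens for the elapsed time, then admit the send only
        # if enough tokens remain.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_s=1024, capacity=2048)
print(bucket.allow(2048))  # first burst fits the capacity
print(bucket.allow(2048))  # immediate second burst is throttled
```

The teleologic property shows up in the parameters: the host keeps participating (and serving useful routes), just at a reduced rate, rather than being cut off entirely.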
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase &amp;quot;an eye for an eye&amp;quot;. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] The retributive point of view holds that it is better to punish someone who commits a crime, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to compare retributive punishment with retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus to internalize how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to society may further damage the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effects of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic-retributive punishment. The new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a dissertation on how government and society should be structured. Within this work, Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws originating from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule means one overall leader of justice who determines what is right and wrong in order to best serve the needs of the system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crimes they commit. The fine imposed upon the criminal may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused. [1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals are commonly asked to pay a fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types deter future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison reduces personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is framed as good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
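&lt;br /&gt;
As a rough illustration of this idea, a morality measure could be computed from measurable network properties. The following sketch is purely hypothetical: the metric names, weights, and caps are our own illustrative assumptions, not part of any existing system.&lt;br /&gt;

```python
# Hypothetical sketch: scoring a node's "morality" from measurable network
# properties. All metric names and weights are illustrative assumptions.

def morality_score(bandwidth_mbps, latency_ms, integrity_rate, abuse_reports):
    """Combine network 'health' measures into a single signed score.

    Positive contributions play the role of the "good" traits above
    (strength, health); abuse reports (spam, DoS) drag the score down.
    """
    score = 0.0
    score += min(bandwidth_mbps / 100.0, 1.0)    # strength: capped at 1.0
    score += max(0.0, 1.0 - latency_ms / 500.0)  # health: low latency is good
    score += integrity_rate                      # fraction of intact transfers
    score -= 2.0 * abuse_reports                 # aggression weighted heavily
    return score

# A well-behaved node scores positively; an aggressive one goes negative.
good = morality_score(bandwidth_mbps=80, latency_ms=50,
                      integrity_rate=0.99, abuse_reports=0)
bad = morality_score(bandwidth_mbps=80, latency_ms=50,
                     integrity_rate=0.99, abuse_reports=3)
```

Other nodes could then compare these scores when deciding whom to trust.&lt;br /&gt;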
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. At the highest level, a crime is committed on purpose; at the lowest, a person is party to a crime through negligence. An example of such a distinction is whether a car hit someone intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity to prevent the need for human intervention/investigation.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crimes on the Internet and in computers. One preventative measure was created in 1984, when the United States Congress passed the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because of unspecific instances of how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; Morris attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that the computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error occurs in the system: say a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error created large losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is intent.[12] With the current structure of how computers are built, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if a computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from taking malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of consequences or shame for causing malicious actions. The problem is that computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable to them. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that a malicious act will procure. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. However, the human who forces his computer to take such malicious actions may be deterred by the consequences that might follow from the law, or by the performance drop on his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent the human user from committing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system may one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. A morality system that gives every node on the system a personal morality rating would allow nodes to communicate with other nodes based on how low or high the rating is. Lowering the rating for malicious actions and raising it for being helpful to the system would allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on these simulated feelings of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
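&lt;br /&gt;
The gating behaviour described above could be sketched as follows. This is a hypothetical illustration: the rating store, the thresholds, and the &amp;quot;relationship parameter&amp;quot; are assumed names for the concepts in this section.&lt;br /&gt;

```python
# Illustrative sketch: a node consults a peer's morality rating before
# communicating, and a floor rating means expulsion. Thresholds and the
# `peer_ratings` store are assumptions for illustration.

EXPULSION_THRESHOLD = -100   # "the lowest level might equal expulsion"

def may_communicate(peer_id, peer_ratings, minimum_rating=0):
    """Return True if this node is willing to talk to `peer_id`.

    `minimum_rating` is the caller's relationship parameter: set it low
    if you do not care how morally "good" your peers are.
    """
    rating = peer_ratings.get(peer_id, 0)   # unknown peers start neutral
    if rating <= EXPULSION_THRESHOLD:
        return False                        # expelled: nobody talks to it
    return rating >= minimum_rating

ratings = {"nodeA": 50, "nodeB": -20, "nodeC": -150}
```

A tolerant caller could pass a negative `minimum_rating` to allow less moral interactions, but an expelled node is refused regardless.&lt;br /&gt;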
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in this section takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is viewed as an incrementally deployable justice system which we refer to as &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
At a high level, the Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the network&#039;s operator(s). It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* It is possible to uniquely identify all computers attempting a connection to the network in question regardless of where they reside (inside or outside the Justice Web)&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
&lt;br /&gt;
In the Justice Web, &amp;quot;Morality Rating&amp;quot; (MR) is assigned by the Judge or Judges of the network. The morality rating determines a user&#039;s access to a server, based on that server&#039;s rulings. For example, a server within the Justice Web could have a rule that any connections below MR: -100 are not allowed to connect. In addition, users within the Justice Web use their MR to gain access to resources publicly available to those within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web, so a computer that has a bad MR in one network might have a good MR in another. &lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Master List&amp;quot; is a database that stores the morality ratings of a Justice Web and is maintained by the judges. Servers within the network store a &amp;quot;Slave List&amp;quot; which mirrors a subset of the entries in the master list. The subset depends on the rules implemented by the server. For instance, if a server decides to deny any connections below MR: -100, all entries below MR: -100 would be stored in the slave list.&lt;br /&gt;
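&lt;br /&gt;
The master/slave list relationship might be sketched as below; the node names, ratings, and filtering rule are illustrative assumptions.&lt;br /&gt;

```python
# Minimal sketch of the Master/Slave list relationship. The master list maps
# node IDs to morality ratings; each server mirrors only the subset its own
# rules care about. All names and values are illustrative.

master_list = {"host1": 10, "host2": -150, "host3": -250, "host4": 5}

def build_slave_list(master, predicate):
    """Mirror the subset of master-list entries matching a server's rule."""
    return {node: mr for node, mr in master.items() if predicate(mr)}

# A server that denies connections below MR: -100 mirrors exactly those
# entries, so it can reject them locally without consulting the judges.
deny_list = build_slave_list(master_list, lambda mr: mr < -100)
```

Keeping only the relevant subset lets each server enforce its own rules without storing the whole database.&lt;br /&gt;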
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and ID of the offender. If the evidence is sufficient, the judge changes the MR of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
By default, the judge is the node in the network with the highest MR (usually the owner), though this option is implementation specific. Alternatively, a ring of judges is appointed and given the power to lower the MR of other judges, with the evidence validated by a majority of the judges. This would be implemented as an attempt to keep the judges in check.&lt;br /&gt;
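&lt;br /&gt;
The majority-validation scheme for a ring of judges could be sketched as follows. The claim format, the judges&#039; validation functions, and the penalty value are all hypothetical.&lt;br /&gt;

```python
# Hedged sketch of the judging process: a claim carries evidence, each judge
# in the ring validates it, and the offender's MR is lowered only if a
# majority agree. The validation logic itself is left abstract.

def adjudicate(claim, judges, master_list, penalty):
    """Lower the offender's MR if a majority of judges accept the evidence."""
    votes = sum(1 for judge in judges if judge(claim))
    if votes > len(judges) / 2:
        offender = claim["offender"]
        master_list[offender] = master_list.get(offender, 0) - penalty
        return True
    return False

# Three hypothetical judges that simply check the evidence log is non-empty.
judges = [lambda c: bool(c["evidence"])] * 3
ml = {"host9": 0}
adjudicate({"offender": "host9", "evidence": ["log entry"]}, judges, ml,
           penalty=50)
```

Requiring a majority means no single compromised judge can punish a node on its own.&lt;br /&gt;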
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
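&lt;br /&gt;
The three-part rule described above might be represented as a small data structure; the field names and example values are illustrative assumptions.&lt;br /&gt;

```python
# Sketch of the three-part rule: the offense, the level of proof needed,
# and the severity of the punishment. Values are illustrative.

from dataclasses import dataclass

@dataclass
class Rule:
    offense: str        # what act the rule covers
    proof_level: int    # how much corroborating evidence the judges require
    severity: int       # how many MR points the punishment removes

rules = [
    Rule(offense="comment spam", proof_level=1, severity=10),
    Rule(offense="denial of service", proof_level=3, severity=100),
]
```

During the judgement of a claim, the judges would look up the matching rule to decide how much evidence is needed and how far to lower the offender&#039;s MR.&lt;br /&gt;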
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim did not tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
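&lt;br /&gt;
One way a node could make its evidence log tamper-evident is sketched below. The text above calls for encryption; this sketch shows the related property of tamper detection, using an HMAC keyed with a hypothetical secret shared with the judges. Key distribution is left out entirely.&lt;br /&gt;

```python
# Sketch of a tamper-evident evidence log: each entry carries an HMAC tag
# computed with a key shared with the judges, so a claimant that alters an
# entry cannot produce a valid tag. The key and log format are assumptions.

import hashlib
import hmac

JUDGE_KEY = b"shared-with-judges-only"  # hypothetical pre-shared key

def log_entry(message):
    """Record a log line along with its authentication tag."""
    tag = hmac.new(JUDGE_KEY, message.encode(), hashlib.sha256).hexdigest()
    return (message, tag)

def verify_entry(entry):
    """Judges recompute the tag to check the entry was not altered."""
    message, tag = entry
    expected = hmac.new(JUDGE_KEY, message.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

entry = log_entry("2011-04-01 host2 -> host1: 5000 connection attempts")
tampered = ("2011-04-01 host3 -> host1: 5000 connection attempts", entry[1])
```

A claimant who rewrites an entry cannot forge the matching tag without the judges&#039; key, so the alteration is detected on retrieval.&lt;br /&gt;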
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with malicious dispositions.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the Slave-Master List concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a ring of judges to handle every single claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by the lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities each have a common set of laws, but not necessarily the same laws as other entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. A global implementation would require:&lt;br /&gt;
* Changes at multiple levels (OS, hardware, network protocols, infrastructure)&lt;br /&gt;
* A complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is produced by automated scripts which insert the same comment, usually including links to other destination websites, into forums on public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a normally available service is flooded with requests, either from numerous IPs obtained through IP spoofing or, in a distributed DoS attack, from multiple valid IPs; this bogs down the system by exhausting the service&#039;s capacity. By maxing out or flooding a service with requests, the attacker makes the system shut down or become very difficult to use; either way, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site he/she is visiting to be legitimate. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user wished to visit, but it is not the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the use of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne(1915),Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed 1910-, &#039;&#039;Ancient History Sourcebook:Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12]Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9048</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9048"/>
		<updated>2011-04-02T19:51:38Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Possible Implementations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice in terms of two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned, and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “an eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not gain any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime than not to, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both treat punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation rests on deterrence: if you are convicted of a crime, then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, so that he internalizes how the crime negatively affects society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers in a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment would match the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can impose punishments on other, criminal computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity who exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions are members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule means one overall leader of justice who determines what is right and wrong in order to best serve the needs of the system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment forces a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime. The fine imposed upon the criminal may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused. [1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals when the punishment is complete. For example, it is up to lawyers, judges, psychologists and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be asked to pay a fine as well as serve a prison sentence; however, they serve different purposes. All three punishment types deter future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison reduces personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality distinguishes good from bad: good is associated with things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, distinguishes good from evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others could certainly work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say, mounting DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers, so that if you do not care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
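&lt;br /&gt;
The idea of grading computers on measurable behaviour can be sketched as a single numeric morality score. The metric names and weights below are illustrative assumptions only, not part of any established design:&lt;br /&gt;

```python
# Hypothetical sketch: deriving a morality score from measured behaviour.
# The metric names and weights are assumptions chosen for illustration.

GOOD_WEIGHTS = {"bandwidth_shared": 2, "data_integrity": 3, "uptime": 1}
BAD_WEIGHTS = {"spam_reports": 5, "dos_reports": 10}

def morality_score(metrics):
    """Reward measurably 'good' behaviour, penalise measurably 'bad' behaviour."""
    good = sum(w * metrics.get(name, 0) for name, w in GOOD_WEIGHTS.items())
    bad = sum(w * metrics.get(name, 0) for name, w in BAD_WEIGHTS.items())
    return good - bad
```

A node sharing bandwidth with high uptime but a few spam reports could then be ranked against its peers by this one number.&lt;br /&gt;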
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted according to how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus is the action of the crime, and the mens rea is the mental state behind it. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, contributing to a crime through negligence. An example of such a distinction is deciding whether a car hit someone intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, avoiding the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime involving the Internet and computers. One preventative measure was created in 1984 by the United States Congress: the Computer Fraud and Abuse Act (“CFAA”). This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many amendments because of ambiguity in how different crimes were categorized under the mens rea. Whether an act was done “knowingly” or “intentionally” changed the degree of punishment, and the distinction between accessing a system and damaging a system had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on computer networks (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except when an error occurs in the system: for example, a lost bit corrupts a destination address, and sensitive information is sent to the wrong recipient. If this error created losses for some entity, would the user be blamed, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings.&lt;br /&gt;
(Perhaps if a computer ran some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from the computer continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how to deter a computer from malicious actions. Following general deterrence theory, we would try to instill some fear of consequences, or shame, in response to malicious actions. The problem is that computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equivalent from the computer’s perspective. Deterring possible criminals only works if they fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was performing those functions or standing idle. However, the human who forces his computer to perform malicious actions may be deterred by the legal consequences that follow, or by the performance drop on his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent the human user from carrying out further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Giving every node on the system a personal morality rating allows nodes to decide whether to communicate with other nodes based on how low or high that rating is. Lowering a node’s morality for malicious actions and raising it for helpful ones would allow computers to &amp;quot;care&amp;quot; about whom they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated feeling of care and shame might allow a justice system to be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic retributive approach to justice: punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local Implementation (Justice Web) ==&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in the following sections takes the above discussion of justice into consideration and applies it to the management of a computer network. The implementation is an incrementally-deployable justice system entitled the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The Justice Web is essentially a computer network that applies a morality rating to the nodes inside the network, as well as to connections coming from outside it. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the leader(s) of the network. It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* All computers attempting a connection to the network must be authentically identifiable, whether they are inside or outside of the Justice Web.&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
&lt;br /&gt;
In the Justice Web, a &amp;quot;Morality Rating&amp;quot; (MR) is assigned by the Judge or Judges of the network. The morality rating determines a user&#039;s access to a server, based on that server&#039;s rules. For example, a server within the Justice Web could have a rule that any connection below MR -100 is not allowed to connect. In addition, users within the Justice Web use their MR to gain access to resources publicly available to those within the network.&lt;br /&gt;
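&lt;br /&gt;
A server’s use of the morality rating can be sketched as a simple threshold check; the function name is an assumption, and the threshold comes from the example rule above:&lt;br /&gt;

```python
# Sketch of per-server admission control based on Morality Rating (MR).
# The function name and example values are illustrative assumptions.

def allows_connection(min_mr, client_mr):
    # A rule such as "connections below MR -100 are not allowed"
    # reduces to a threshold comparison against the client rating.
    return client_mr >= min_mr

print(allows_connection(-100, -50))   # an ordinary client is admitted
print(allows_connection(-100, -150))  # a heavily sanctioned client is refused
```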
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web, so a computer that has a bad MR in one network might have a good MR in another. &lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Master List&amp;quot; is a database that stores the morality ratings of a Justice Web, and is maintained by the judges. Servers within the network store a &amp;quot;Slave List&amp;quot; which mirrors a subset of the entries in the master list. The subset depends on the rules implemented by the server. For instance, if a server decides to deny any connection with MR -100 or below, all entries with MR -100 or below would be stored in the slave list.&lt;br /&gt;
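&lt;br /&gt;
The relationship between the two lists can be sketched as a filter over the master list; the host names and cut-off below are assumed examples:&lt;br /&gt;

```python
# Sketch: a server's "Slave List" as a filtered mirror of the judges'
# Master List. Host names and the deny cut-off are assumptions.

master_list = {"hostA": 20, "hostB": -100, "hostC": -250}

def slave_list(master, deny_at_or_below):
    # Mirror only the entries the server needs to enforce its own rule:
    # every host whose MR sits at or below the deny threshold.
    return {host: mr for host, mr in master.items() if deny_at_or_below >= mr}

blocked = slave_list(master_list, -100)
print(blocked)  # only hostB and hostC need to be mirrored locally
```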
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and ID of the offender. If the evidence is sufficient, the judge changes the MR of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
A judge is appointed by default as the node in the network with the highest MR (usually the owner), though this is implementation-specific. Alternatively, a ring of judges may be appointed and given the power to lower the MR of other judges, with the evidence validated by a majority of the judges; this keeps the judges themselves in check.&lt;br /&gt;
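&lt;br /&gt;
The majority rule for a ring of judges can be sketched as follows; judge names, the claim format, and the penalty value are assumptions, and only the majority requirement comes from the text:&lt;br /&gt;

```python
# Sketch of majority validation by a ring of judges. Names and values
# are illustrative assumptions.

def majority_validates(votes):
    # votes maps each judge to True/False on whether the evidence holds up.
    approvals = sum(1 for v in votes.values() if v)
    return 2 * approvals > len(votes)   # strict majority required

def apply_ruling(master_list, offender, penalty, votes):
    # A claim only lowers the offender's MR once a majority validates it.
    if majority_validates(votes):
        master_list[offender] = master_list.get(offender, 0) - penalty
    return master_list
```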
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
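&lt;br /&gt;
The three parts of a rule can be sketched as a small record; the field representations and the sample rule are assumptions:&lt;br /&gt;

```python
# Minimal sketch of a Justice Web rule with the three parts named in the
# text: the offense, the level of proof required, and the punishment.

from dataclasses import dataclass

@dataclass
class Rule:
    offense: str        # the behaviour the rule covers
    proof_level: int    # how much corroborating evidence a claim needs
    penalty: int        # how far a validated claim lowers the offender MR

def claim_succeeds(rule, evidence_count):
    # A claim is judged against the rule's required level of proof.
    return evidence_count >= rule.proof_level

spam_rule = Rule(offense="comment spam", proof_level=3, penalty=50)
```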
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim cannot tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
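&lt;br /&gt;
The text calls for encrypted logs; a closely related mechanism that is simpler to sketch is tamper evidence via an HMAC under a key shared only with the judges. The key, entry format, and function names below are assumptions:&lt;br /&gt;

```python
# Sketch of tamper-evident log entries using an HMAC, standing in for the
# encryption described in the text. The shared key is an assumed secret
# distributed to the judges out of band.

import hashlib
import hmac

JUDGE_KEY = b"shared-secret-known-only-to-judges"

def seal(entry):
    # Attach an authentication tag the claimant cannot forge without the key.
    tag = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return entry, tag

def verify(entry, tag):
    # Judges recompute the tag; any edited entry fails the comparison.
    expected = hmac.new(JUDGE_KEY, entry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```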
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the Master-Slave List concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a ring of judges to handle every single claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation-specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. &lt;br /&gt;
&lt;br /&gt;
A global justice system requires changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
&lt;br /&gt;
It effectively requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we focus on is produced by automated scripts which insert the same comment, usually including links to other destination websites, into forums on public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a normally available service is flooded with requests, either from numerous IPs through IP spoofing or from the multiple valid IPs of a distributed DoS attack, bogging the system down as its service capacity is exhausted. By maxing out or flooding a service with requests, the attack leaves the system either shut down or very difficult to use; either way, the outcome is denial of that service. We have come up with some solutions for punishing the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site to be secure. Unfortunately, the site the user has been directed to has been designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not actually the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the usage of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne(1915),Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed 1910-, &#039;&#039;Ancient History Sourcebook:Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9047</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9047"/>
		<updated>2011-04-02T19:47:00Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Possible Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [2] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
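As a rough sketch, the throttling punishment above could be modelled with a simple token-bucket cap on a flagged node&#039;s transfers; the class name and rate numbers below are illustrative assumptions, not part of any existing system.&lt;br /&gt;

```python
# Illustrative sketch of the throttling punishment described above:
# cap a flagged computer's bandwidth with a simple token bucket.
# All names and rate numbers are assumptions for illustration.

class Throttle:
    def __init__(self, bytes_per_tick):
        self.rate = bytes_per_tick
        self.tokens = bytes_per_tick

    def tick(self):
        """Refill the allowance once per time slice."""
        self.tokens = self.rate

    def allow(self, nbytes):
        """Permit a transfer only while the allowance lasts."""
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# A punished node would be handed a much smaller allowance than usual.
punished = Throttle(bytes_per_tick=1000)
```

A judge could impose the punishment by replacing a node&#039;s normal allowance with a much smaller one, and later lift it by restoring the original rate.&lt;br /&gt;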
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, but society may not get any benefit from the punishment.[3] The point of view of retributive punishment is that it is better to punish someone who commits a crime regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to discuss retributive punishment in comparison to retaliation. Although they both incorporate the concept of punishment as a just, and necessary, act, they have very different goals. Retribution focuses on the wrongdoing of the criminal whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime then someone will get to exact revenge and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus he should internalize how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system are sharing finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, the effects of that crime have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment would match the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed within the system such that some computers can impose punishments on other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity which exists above the law. This sovereign ruler is the highest authority of the law, but he may assign lesser judges who may carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable, as the law is known to all members of the society. The exceptions to this are any members of society who are without reason, for example “children and madmen”. Punishment is a necessary evil and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system the sovereign may also reward individuals, and thus the balance of punishment and reward are the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain or possibly disfiguring a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
&lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of the professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be required to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position which is divided into two categories: “master-morality” and “slave-morality”. Master-morality is split based on good vs. bad; for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil; for example, good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say, DoS or spam attacks) then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not) then you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus defines the action of the crime, and the mens rea defines the mental state. The mental state of a person is widely regarded as relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, down to the lowest, being part of a crime negligently. An example of such a distinction would be deciding whether a car hitting someone had been done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer, there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason. This would create a justice system with a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act are punished with the same severity to prevent the need for human interventions and investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought on many new kinds of crime involving computers and the Internet. One example of a preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), created by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because of unspecific instances of how different crimes were categorized based on the mens rea. The distinction between “knowingly” and “intentionally” committing an act changed the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures then current on computer networks by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except in cases where an error has occurred in the system; for example, a bit has gone missing and sensitive information has been sent to an incorrect address. If this error created losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(*perhaps if the computer was running some genetic programming to create a program which it deemed good, and then intentionally used it, then that intent to use differs from it continually creating new programs until it decides one is suitable)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from performing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of the consequences or shame that would come from causing malicious actions. This approach to justice is difficult because computers do not have feelings; any kind of work, from word processing to a denial of service attack, is equal with respect to what a computer prefers to do. Deterring possible criminals only works if they are afraid of the consequences and cannot accept the ratio of profit to penalty that they would procure from a malicious act. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. However, the human that forces his computer to perform such malicious actions may be deterred from doing so because of the consequences that might follow from the law, or the possible performance drop of his own computer. The penalties currently set in place will only affect a human; whether it be a jail sentence or confiscation of the physical computer, the human aspect of the problem will be removed from the computer element. Had the computer itself been the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on a computer network by the human user using another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system will be deterred from malicious actions. Implementing a morality system in which every node on the system carries a personal morality rating will allow nodes to communicate with other nodes based on how low or high the rating is. Lowering the rating for malicious actions and raising it for being helpful to the system will allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel shame when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on this simulated feeling of care and shame, a justice system might be implemented on computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementations=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we present two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system. &lt;br /&gt;
&lt;br /&gt;
The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: Comment Spam, Denial of Service Attacks, and Phishing.&lt;br /&gt;
&lt;br /&gt;
==Local &amp;quot;Justice Web&amp;quot; Implementation==&lt;br /&gt;
* provide details of implementation here.&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in the following sections takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is an incrementally-deployable justice system entitled the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the leader(s) of the network. It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* All computers attempting a connection to the network must be able to be authentically identified whether they are inside or outside of the Justice Web.&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
&lt;br /&gt;
In the Justice Web, &amp;quot;Morality Rating&amp;quot; (MR) is assigned by the Judge or Judges of the network. The morality rating determines a user&#039;s access to a server, based on that server&#039;s rulings. For example, a server within the Justice Web could have a rule that any connections below MR: -100 are not allowed to connect. In addition, users within the Justice Web use their MR to gain access to resources publicly available to those within the network.&lt;br /&gt;
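As an illustration, a server&#039;s admission rule could be as small as a single comparison against its MR floor; the function and the -100 floor below are only a sketch of the example rule just described, not part of the design.&lt;br /&gt;

```python
# Minimal sketch of a server enforcing its MR rule: connections below
# the server's floor are refused. The -100 floor mirrors the example
# rule above; the function itself is an illustrative assumption.

def admit(connection_mr, server_floor=-100):
    """Allow a connection only if its morality rating meets the floor."""
    return connection_mr >= server_floor
```

Each server would pick its own floor, so the same client could be admitted by one server and refused by another.&lt;br /&gt;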
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web, so a computer that has a bad MR in one network might have a good MR in another. &lt;br /&gt;
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Master List&amp;quot; is a database that stores the morality ratings of a Justice Web, and is maintained by the judges. Servers within the network store a &amp;quot;Slave List&amp;quot; which mirrors a subset of the entries in the master list. The subset depends on the rules implemented by the server. For instance, if a server decides to deny any connections with an MR of -100, all entries that have an MR of -100 would be stored in the slave list.&lt;br /&gt;
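A minimal sketch of deriving a slave list, assuming the master list is a simple mapping from node identifiers to MR values (a layout we are inventing for illustration):&lt;br /&gt;

```python
# Sketch of deriving a server's "slave list" from the master list:
# only the entries needed to enforce the server's deny rule are
# mirrored locally. The data layout is assumed, not specified.

def build_slave_list(master, deny_at=-100):
    """Mirror only the entries whose MR triggers the deny rule."""
    return {node: mr for node, mr in master.items() if deny_at >= mr}
```

Keeping only the deniable entries keeps each server&#039;s mirror small while still letting it enforce its rule without consulting the judges.&lt;br /&gt;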
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and ID of the offender. If the evidence is sufficient, the judge changes the MR of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
A judge is appointed by default as the node in the network with the highest MR (usually the owner), though this option is implementation-specific. Alternatively, a ring of judges is appointed and given the power to lower the MR of other judges, but the evidence must be validated by a majority of the judges. This would be implemented as an attempt to keep the judges in check.&lt;br /&gt;
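The majority-validation step for a ring of judges might look like the following sketch; the severity scale, the vote representation, and all names are assumptions for illustration.&lt;br /&gt;

```python
# Sketch of judgement by a ring of judges: the offender's MR is only
# lowered when a strict majority of the judges accepts the evidence.
# Severity scale and vote representation are illustrative assumptions.

def judge_claim(mr_table, offender, severity, votes):
    """Apply the punishment only on a strict majority of accepting votes."""
    accepted = sum(1 for v in votes if v)
    if accepted * 2 > len(votes):
        mr_table[offender] = mr_table.get(offender, 0) - severity
        return True
    return False
```

A rejected claim leaves the offender&#039;s MR untouched, which is what keeps a single rogue judge from punishing nodes unilaterally.&lt;br /&gt;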
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
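A rule&#039;s three parts could be sketched as a small table keyed by offence; the offence names echo the use cases below, while every numeric value is an illustrative assumption.&lt;br /&gt;

```python
# Sketch of a rule table: each rule pairs an offence with the level of
# proof required and the punishment severity. All values are made up.

RULES = {
    "comment_spam":      {"proof_required": 1, "severity": 10},
    "phishing":          {"proof_required": 2, "severity": 40},
    "denial_of_service": {"proof_required": 3, "severity": 50},
}

def rule_for(offence):
    """Look up the rule a claim would be judged against, if any."""
    return RULES.get(offence)
```

During judgement, a claim whose evidence does not reach the rule&#039;s proof level would simply be discarded.&lt;br /&gt;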
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim did not tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
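One hedged sketch of how tamper evidence could work, using a message authentication code in place of the unspecified encryption: each entry carries a tag computed with a key shared between the node and the judges (the key-provisioning step is an assumption).&lt;br /&gt;

```python
# Sketch of tamper-evident evidence logging. An HMAC tag stands in
# for the unspecified "encryption" in the design: any change to a
# logged message invalidates its tag. Key provisioning is assumed.

import hashlib
import hmac

JUDGE_KEY = b"key shared with the judges"   # assumed provisioning step

def log_entry(message):
    """Return the message with a tag the judges can later verify."""
    tag = hmac.new(JUDGE_KEY, message, hashlib.sha256).hexdigest()
    return (message, tag)

def verify_entry(entry):
    """Recompute the tag; any tampering with the message changes it."""
    message, tag = entry
    expected = hmac.new(JUDGE_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

Strictly speaking this provides authentication rather than secrecy; a deployment wanting both would encrypt as well as tag each entry.&lt;br /&gt;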
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the Master-Slave List concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a single ring of judges to handle every claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation-specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
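The escalation idea could be sketched as a one-line routing decision; the threshold below is arbitrary.&lt;br /&gt;

```python
# Sketch of hierarchical claim routing: minor claims stay on the
# local rung of judges, severe ones escalate. Names and the
# escalation threshold are illustrative assumptions.

def route_claim(severity, escalate_at=50):
    """Pick which rung of the hierarchy should hear a claim."""
    return "upper_ring" if severity >= escalate_at else "local_ring"
```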
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built by millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. &lt;br /&gt;
&lt;br /&gt;
Such a system requires changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
&lt;br /&gt;
It also requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we are focusing on consists of automated scripts which insert the same comment, usually including links to other destination websites, into forums on public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site hosting the forum. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an act in which a service that is normally available is accessed either by numerous IPs through IP spoofing or by a distributed DoS attack using multiple valid IPs, which in turn bogs down the system because the service&#039;s capacity is constrained. By maxing out or flooding a service with multiple requests, a system is either shut down or made very difficult to use; regardless, the outcome is a denial of that service. We have come up with some solutions for how to punish the computers that participate in such an attack, voluntarily or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site he/she is visiting to be secure. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not actually the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the usage of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &#039;&#039;A Theory of Justice: Revised Edition&#039;&#039;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &#039;&#039;Discipline &amp;amp; Punish: The Birth of the Prison&#039;&#039;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &#039;&#039;Ecce Homo &amp;amp; The Antichrist&#039;&#039;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7] Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8] Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915) and Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed., 1910-11, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, translated by L. W. King, ed. Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9014</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=9014"/>
		<updated>2011-03-31T16:43:09Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Global Implementation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Members=&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
=Abstract=&lt;br /&gt;
&lt;br /&gt;
The goal of this article is to investigate the feasibility of implementing a system of justice on a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence. &lt;br /&gt;
&lt;br /&gt;
This article is divided into two main sections: the first is a discussion of theories of human justice and of how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used against four deviant acts: comment spam, unauthorized access, denial of service, and phishing attacks.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Can Justice be Implemented on a Distributed Computing System: Discussion=&lt;br /&gt;
&lt;br /&gt;
In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Theory of Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the “benefits and burdens” of society. &lt;br /&gt;
&lt;br /&gt;
In order for a society to uphold justice it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: &#039;&#039;&#039;Teleologic, Retributive,&#039;&#039;&#039; and &#039;&#039;&#039;Teleologic Retributive&#039;&#039;&#039;. [3]&lt;br /&gt;
&lt;br /&gt;
====Teleologic View of Punishment:====&lt;br /&gt;
The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]&lt;br /&gt;
&lt;br /&gt;
Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that punishment is beneficial for the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator’s network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.&lt;br /&gt;
&lt;br /&gt;
====Retributive View of Punishment:====&lt;br /&gt;
Retributive punishment is defined by the belief that punishment itself is just, or intrinsically valuable, even if there is no social benefit to the punishment. This view is probably best characterised by the phrase “eye for an eye”. Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime than to let the crime go unpunished, regardless of the severity of the punishment.[2]&lt;br /&gt;
&lt;br /&gt;
It is also important to contrast retributive punishment with retaliation. Although both incorporate the concept of punishment as a just, and necessary, act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus to internalize how the crime has a negative effect on society. [1] &lt;br /&gt;
&lt;br /&gt;
Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment is beneficial for a computer system. Since computers on a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may result in further negative effects on the system. If a crime has been committed, its effects have already been felt by the computing system. Punishing the perpetrator of the criminal act will not reverse the effect of the act, and it may adversely affect the system. &lt;br /&gt;
&lt;br /&gt;
For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.&lt;br /&gt;
&lt;br /&gt;
====Teleologic Retributive View of Punishment:====&lt;br /&gt;
This third view of punishment combines the concept of the need to punish within the limits of what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society but it is only enforced within acceptable limits.&lt;br /&gt;
&lt;br /&gt;
To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment matches the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Structure of Punishment&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
To maintain a stable and efficient distributed system, punishment requires structure; more accurately, there needs to be some power imbalance designed into the system such that some computers can impose punishments on other, criminal computers. Here we will briefly discuss a few methods which may be used to implement a penal system in a society.&lt;br /&gt;
&lt;br /&gt;
====Sovereign Rule:====&lt;br /&gt;
In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work, Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges who carry out punishment in accordance with the laws.[4]&lt;br /&gt;
 &lt;br /&gt;
In this system, breaking the law is never excusable as the law is known to all members of the society. The exceptions to this are any members of society that are without reason, for example “children and madmen”. Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the “commonwealth”. The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward forms the “nerves and joints which move the limbs of a commonwealth.”[4]&lt;br /&gt;
&lt;br /&gt;
Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system. &lt;br /&gt;
&lt;br /&gt;
====Corporal Punishment, Economic Punishment, and Prison:====&lt;br /&gt;
Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]&lt;br /&gt;
&lt;br /&gt;
Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime committed but it should cause the criminal the same amount of distress as the crime that was committed. [1] &lt;br /&gt;
Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals as to when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal’s prison sentence has ended.[5]&lt;br /&gt;
 &lt;br /&gt;
These three methods are not mutually exclusive, as criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence; however, they all serve different purposes. All three punishment types serve as a deterrent to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Additional Concepts Related to Justice&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
====Morality:====&lt;br /&gt;
&lt;br /&gt;
For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: “master-morality” and “slave-morality”. Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6] &lt;br /&gt;
&lt;br /&gt;
Although some of these terms make no sense in the realm of computers, others could certainly work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
&lt;br /&gt;
If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave &amp;quot;socially&amp;quot;. This would further allow punishment methods based on shame to be exacted based on how &amp;quot;bad&amp;quot; a computer’s moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Intent:====&lt;br /&gt;
=====Mens Rea - state of the mind:=====&lt;br /&gt;
&lt;br /&gt;
It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus is the action of the crime, and the mens rea is the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code (“MPC”) categorizes mens rea into four levels: purposely, knowingly, recklessly, and negligently. These levels range from the highest, committing a crime on purpose, to the lowest, contributing to a crime negligently. An example of such a distinction is deciding whether a car hitting someone was done intentionally or by accident.[7]&lt;br /&gt;
&lt;br /&gt;
For a computer there is no such thing as &amp;quot;intent&amp;quot;; there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue, a system of justice for computers can take one of two approaches: it can attempt to discover the intent of the user of the computer before distributing punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer, then the system must involve a human who can decipher human reason, creating a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act be punished with the same severity, to avoid the need for human interventions/investigations.&lt;br /&gt;
&lt;br /&gt;
=====Computer Fraud and Abuse Act:=====&lt;br /&gt;
&lt;br /&gt;
The digital age has brought many new kinds of crime involving computers and the Internet. One example of a preventative measure for these crimes is the Computer Fraud and Abuse Act (“CFAA”), enacted by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many changes because crimes were not categorized specifically enough by mens rea: whether an act was done “knowingly” or “intentionally” changed the degree of punishment, and the distinction between accessing a system and damaging a system had to be made more specific over time.[8]&lt;br /&gt;
&lt;br /&gt;
One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures then in place on computer networks by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet had been affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in mens rea.[9]&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
===Applying justice to computers===&lt;br /&gt;
&lt;br /&gt;
The first issue arises from the discussion of the mens rea. Some might say that the computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true except in the case where an error has occurred in the system, or a bit has gone missing, and sensitive information intended for one address has been sent to an incorrect one. If this error created many losses for some entity, would the user be blamed for the error, or would the computer be blamed for negligently sending to the wrong address? This type of situation seems similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of how computers are set up, it is difficult to map a mens rea scheme to their inner workings.&lt;br /&gt;
(*Perhaps if the computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from continually creating new programs until it decides one is suitable.)&lt;br /&gt;
&lt;br /&gt;
Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from performing malicious actions. Following the footsteps of general deterrence theory, we would try to instill some fear of consequences, or shame, in those who cause malicious actions. This approach to justice is difficult because computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equally agreeable to them. Deterring possible criminals only works if they fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If a punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was performing those functions or standing by idle. However, the human that might force their computer to perform such malicious actions may be deterred by the consequences that could follow from the law, or by the performance drop on their own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for malicious actions, nothing would prevent the human user from performing further malicious actions on the network from another computer terminal.[13]&lt;br /&gt;
&lt;br /&gt;
Since a distributed system would one day want to grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. Implementing a morality system in which every node on the system has a personal morality rating would allow nodes to decide whether to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for being helpful to the system would allow computers to &amp;quot;care&amp;quot; about who they are communicating with, and to feel &amp;quot;shame&amp;quot; when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). Based on these simulated feelings of care and shame, a justice system might be implemented for computers.&lt;br /&gt;
&lt;br /&gt;
=Possible Implementation=&lt;br /&gt;
Designing a complete justice system implementation is far beyond the scope of this project. That does not mean it is unimportant to describe the features that a fully functional system would require, and to outline the potential benefits and shortcomings of the system. In fact, our team could not come up with one complete system that would be feasible; instead we have two potential implementations, each with their own positives and negatives. Although the implementations have separate modes of operation, they both take a teleologic-retributive approach to punishment; by this we mean punishment is necessary, but we will not inflict punishments that would negatively affect the performance or stability of the distributed system.&lt;br /&gt;
&lt;br /&gt;
The remainder of this article details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.&lt;br /&gt;
&lt;br /&gt;
==Local &amp;quot;Justice Web&amp;quot; Implementation==&lt;br /&gt;
* provide details of implementation here.&lt;br /&gt;
&lt;br /&gt;
===Overview===&lt;br /&gt;
&lt;br /&gt;
The implementation described in the following sections takes into consideration the above discussion of justice and applies it to the management of a computer network. The implementation is an incrementally-deployable justice system currently entitled the &amp;quot;Justice Web&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The Justice Web is essentially a computer network that applies a morality rating to those inside the network, as well as to connections coming from outside the network. The morality rating is based on previous actions taken by the entity in question, and the rating is decided by the leader(s) of the network. It is up to the individual nodes within the network to enforce restrictions based on the morality rating master list, which is distributed throughout the network.&lt;br /&gt;
&lt;br /&gt;
In order to implement the Justice Web, certain assumptions must be made:&lt;br /&gt;
&lt;br /&gt;
* All computers attempting a connection to the network must be able to be authentically identified whether they are inside or outside of the Justice Web.&lt;br /&gt;
* Probably more.&lt;br /&gt;
&lt;br /&gt;
===Morality Rating===&lt;br /&gt;
&lt;br /&gt;
In the Justice Web, a &amp;quot;Morality Rating&amp;quot; (MR) is assigned by the Judge or Judges of the network. The morality rating determines a user&#039;s access to a server, based on that server&#039;s rules. For example, a server within the Justice Web could have a rule that any connection below MR: -100 is not allowed to connect. In addition, users within the Justice Web use their MR to gain access to resources publicly available to those within the network.&lt;br /&gt;
&lt;br /&gt;
Morality ratings are local to each network running the Justice Web, so a computer that has a bad MR in one network might have a good MR in another. &lt;br /&gt;
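As a minimal sketch, the admission decision a server makes from its rule and its network-local ratings could look like the following; the class name, the neutral default rating of 0 for unknown hosts, and the example ratings are illustrative assumptions, not part of the Justice Web design:&lt;br /&gt;

```python
# Hypothetical sketch of per-server MR admission control in a Justice Web.
# Class, method, and field names are assumptions made for illustration.

class JusticeWebServer:
    def __init__(self, min_mr):
        # Rule: connections below min_mr are not allowed to connect.
        self.min_mr = min_mr

    def accepts(self, host, ratings):
        # Hosts unknown to this network get a neutral rating of 0 (assumption).
        return ratings.get(host, 0) >= self.min_mr

# Morality ratings are local to each Justice Web.
web_a = {"host1": -150, "host2": 40}
web_b = {"host1": 25}

server = JusticeWebServer(min_mr=-100)
print(server.accepts("host1", web_a))  # False: -150 is below the threshold
print(server.accepts("host1", web_b))  # True: 25 is above it
```

Because each network keeps its own ratings, the same host can be refused by one Justice Web and admitted by another, as the two lookups above show.&lt;br /&gt;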
&lt;br /&gt;
===Master List===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Master List&amp;quot; is a database that stores the morality ratings of a Justice Web, and is maintained by the judges. Servers within the network store a &amp;quot;Slave List&amp;quot; which mirrors a subset of the entries in the master list. The subset depends on the rules implemented by the server. For instance, if a server decides to deny any connections below MR: -100, all entries below MR: -100 would be stored in the slave list.&lt;br /&gt;
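The master/slave relationship described above can be sketched as a simple filter over the master list; the dictionary format and example values are assumptions, since the wiki does not specify a concrete representation:&lt;br /&gt;

```python
# Sketch: a server's slave list mirrors only the subset of the master list
# that its own rules consult (here, a deny-below-cutoff rule).
# The dictionary format is an illustrative assumption.

master_list = {"host1": -150, "host2": 40, "host3": -120, "host4": 10}

def build_slave_list(master, cutoff):
    # Keep only the entries the server's deny rule could match.
    return {host: mr for host, mr in master.items() if mr < cutoff}

slave_list = build_slave_list(master_list, cutoff=-100)
print(sorted(slave_list))  # ['host1', 'host3']
```

Mirroring only the rule-relevant subset keeps each server&#039;s copy small while the judges maintain the authoritative full list.&lt;br /&gt;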
&lt;br /&gt;
===Judges===&lt;br /&gt;
&lt;br /&gt;
A Justice Web needs one or more judges to function. When an offense occurs, a node in the network submits a claim, along with an encrypted evidence log and ID of the offender. If the evidence is sufficient, the judge changes the MR of the offender based on the severity of the offense.&lt;br /&gt;
&lt;br /&gt;
A judge is appointed by default as the node in the network with the highest MR (usually the owner), though this option is implementation specific. Alternatively, a ring of judges is appointed and given the power to lower the MR of other judges, but the evidence must be validated by a majority of the judges. This would be implemented as an attempt to keep the judges in check.&lt;br /&gt;
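The ring-of-judges variant can be sketched as a majority vote over submitted evidence; the validator callables and the penalty bookkeeping are assumptions for illustration, not a specified protocol:&lt;br /&gt;

```python
# Sketch of the "ring of judges" variant: an MR change is applied only
# when a strict majority of judges validate the submitted evidence.
# The judge callables stand in for real evidence checking.

def apply_claim(ratings, offender, penalty, judges, evidence):
    votes = sum(1 for judge in judges if judge(evidence))
    if votes > len(judges) // 2:          # strict majority required
        ratings[offender] = ratings.get(offender, 0) - penalty
        return True
    return False

ratings = {"host1": 0}
judges = [lambda e: True, lambda e: True, lambda e: False]

ok = apply_claim(ratings, "host1", penalty=50, judges=judges, evidence=b"log")
print(ok, ratings["host1"])  # True -50
```

Requiring a strict majority means no single judge, or any minority of judges, can lower another node&#039;s MR alone, which is the check on judicial power described above.&lt;br /&gt;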
&lt;br /&gt;
===Rules===&lt;br /&gt;
&lt;br /&gt;
Each Justice Web specifies its own set of rules. A rule consists of three parts: the offense, what level of proof is needed, and the severity of the punishment. The creation of these rules is by default left up to the judges. As stated above, the rules are used by the system during the judgement of claims.&lt;br /&gt;
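The three parts of a rule named above can be encoded directly; the field names and example values here are assumptions made for the sketch:&lt;br /&gt;

```python
# Illustrative encoding of a Justice Web rule as the three parts the text
# names: the offense, the level of proof required, and the severity of
# the punishment. Field names are assumptions.

from dataclasses import dataclass

@dataclass
class Rule:
    offense: str
    proof_level: int     # e.g. minimum number of corroborating log entries
    severity: int        # MR points deducted on a validated claim

rules = [
    Rule("comment-spam", proof_level=1, severity=10),
    Rule("denial-of-service", proof_level=3, severity=100),
]

# During judgement, a claim is matched against the rule for its offense.
by_offense = {r.offense: r for r in rules}
print(by_offense["denial-of-service"].severity)  # 100
```

Since rule creation is left to the judges, each Justice Web would populate its own rule table with offenses and severities appropriate to its network.&lt;br /&gt;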
&lt;br /&gt;
===Evidence Logs===&lt;br /&gt;
&lt;br /&gt;
Evidence logs are stored on each node of the Justice Web, and they keep track of the most recent network activity of that computer. The logs are encrypted to ensure that the computer making the claim cannot tamper with the evidence. Upon being retrieved by the judges, the logs are decrypted and processed. Depending on the network, the evidence made available may or may not be sufficient to justify the claim.&lt;br /&gt;
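The tamper-resistance property the logs need can be sketched with a keyed MAC from the Python standard library. The wiki says the logs are encrypted; this sketch shows only the integrity half of that goal (what lets a judge trust that the claimant did not alter a log), and the shared judge key is an assumption:&lt;br /&gt;

```python
# Sketch: tamper-evident evidence logs using an HMAC keyed by the judges.
# This illustrates integrity protection only, not confidentiality.
import hmac
import hashlib

JUDGE_KEY = b"shared-secret-held-by-judges"  # assumption for the sketch

def seal(log: bytes) -> bytes:
    # The node seals each log entry; only key holders can forge a tag.
    return hmac.new(JUDGE_KEY, log, hashlib.sha256).digest()

def verify(log: bytes, tag: bytes) -> bool:
    # Judges recompute the tag and compare in constant time.
    return hmac.compare_digest(seal(log), tag)

log = b"host1 -> host2 : 10000 connection attempts"
tag = seal(log)
print(verify(log, tag))         # True
print(verify(log + b"x", tag))  # False: log was tampered with
```

A full design would combine this with encryption and per-node keys, but the check above is what makes a modified log detectable during judgement.&lt;br /&gt;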
&lt;br /&gt;
===Membership===&lt;br /&gt;
&lt;br /&gt;
There are three possible reasons for joining a Justice Web. You could be:&lt;br /&gt;
* A server looking for protection from illegitimate traffic and other computers with dispositions of malcontent.&lt;br /&gt;
* A client looking for protection from phishing attacks and network viruses. (Though this might conflict with the Slave-Master List concept)&lt;br /&gt;
* A server or client looking for access to distributed resources within the network.&lt;br /&gt;
&lt;br /&gt;
When joining a Justice Web, a computer retains its MR from when it was outside of the network. In other words, you can&#039;t reset your score just by joining the network.&lt;br /&gt;
&lt;br /&gt;
===Jurisdictions===&lt;br /&gt;
&lt;br /&gt;
As a Justice Web grows, it would become infeasible for a ring of judges to handle every single claim. If the network is sufficiently large, it would make sense to implement a hierarchy so that less severe claims are handled by lower rungs. Again, this would be implementation specific, but it would especially make sense in the case of multiple Justice Webs joining or forming some sort of alliance.&lt;br /&gt;
&lt;br /&gt;
==Global Implementation==&lt;br /&gt;
Extrapolating the concept of the &amp;quot;local justice web&amp;quot; to a multi-network environment is not trivial. The Internet as we know it today is built from millions of interconnected local networks. If we attempt to replicate the properties of the local network at a larger scale, we notice a few important issues:&lt;br /&gt;
&lt;br /&gt;
*Where should the master morality list be stored? - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the MR of a given host). &lt;br /&gt;
&lt;br /&gt;
*How are judges elected? - Self-governing entities often have a common set of laws, but not necessarily the same laws as different entities. There are some known cases (UN, NATO, etc.) where countries participate in so-called &amp;quot;global councils&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
*Other issues.&lt;br /&gt;
&lt;br /&gt;
To address these issues, we do not believe an incrementally deployable solution is possible. &lt;br /&gt;
&lt;br /&gt;
Such a solution requires changes at multiple levels (OS, hardware, network protocols, infrastructure).&lt;br /&gt;
&lt;br /&gt;
It effectively requires a complete reboot of the Internet.&lt;br /&gt;
&lt;br /&gt;
Advantages:&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
==Use Case Investigation==&lt;br /&gt;
&lt;br /&gt;
===Case 1: Comment Spam:===&lt;br /&gt;
The first deviant act we will investigate is comment spam. The type of spam we are focusing on is typically produced by automated scripts that insert the same comment, usually including links to other destination websites, on the forums of public sites. Although usually merely annoying, these comments can direct users to locations where malicious code may be introduced to unsuspecting users who trust the content of the original site where the forum was hosted. &lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 2: Denial of Service:===&lt;br /&gt;
Denial of service is an attack in which a service that is normally available is flooded with requests, either from numerous IPs through IP spoofing or, in a distributed DoS attack, from multiple valid IPs, bogging down the system by constraining its service capacity. By maxing out or flooding a service with multiple requests, an attacker leaves the system either shut down or very difficult to use; regardless, the outcome is a denial of that service. We have come up with some solutions for punishing the computers that participate in such an attack, voluntary or not.[14]&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
===Case 3: Phishing:===&lt;br /&gt;
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a user visits a website and enters information or clicks on a link, expecting the site he/she is visiting to be secure. Unfortunately, the site they have been directed to has been designed to imitate the appearance and behaviour of the site the user wishes to visit, but it is not actually the real webpage. As a result, a user may expose private information to parties that are not legally entitled to view the data. Although it is not illegal for the &amp;quot;fake site&amp;quot; to exist, the usage of the site is an act that must be addressed by the justice system.&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Local Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;u&amp;gt;&amp;lt;i&amp;gt;Global Implementation Solution:&amp;lt;/i&amp;gt;&amp;lt;/u&amp;gt;====&lt;br /&gt;
* provide details&lt;br /&gt;
&lt;br /&gt;
=Resources=&lt;br /&gt;
[1] Posner, Richard A., &#039;&#039;Retribution and Related Concepts of Punishment&#039;&#039;, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. [http://www.econ.brown.edu/fac/glenn_loury/louryhomepage/teaching/Ec%20222/posner_punishment.pdf PDF]&lt;br /&gt;
&lt;br /&gt;
[2] Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[3] Ezorsky, Gertrude, &#039;&#039;Philosophical Perspectives on Punishment&#039;&#039;, State University of New York Press, 1972. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=Jba2lFg3KOMC&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+punishment&amp;amp;ots=LuzbsNmWlg&amp;amp;sig=KTvWhaWMxIRslk38tZmucty-jNw#v=onepage&amp;amp;q=concepts%20of%20punishment&amp;amp;f=false HTML] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[4] Hobbes, Thomas, &#039;&#039;The Leviathan&#039;&#039;, first published 1651, republished by Forgotten Books, 2008. [http://books.google.ca/books?id=-Q4nPYeps6MC&amp;amp;pg=PR7&amp;amp;lpg=PR7&amp;amp;dq=the+leviathan&amp;amp;source=bl&amp;amp;ots=_vZA8CpLY0&amp;amp;sig=Wn_1z_Sqsx3-YA2dml2A458M_xU&amp;amp;hl=en&amp;amp;ei=V0R6TZT_D8XwrAGiz8zxBQ&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=9&amp;amp;sqi=2&amp;amp;ved=0CFAQ6AEwCA#v=onepage&amp;amp;q&amp;amp;f=false HTML]&lt;br /&gt;
&lt;br /&gt;
[5] Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[6] Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
&lt;br /&gt;
[7]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part A)&lt;br /&gt;
&lt;br /&gt;
[8]  Haeji Hong, &#039;&#039;Hacking Through the Computer Fraud and Abuse Act&#039;&#039;, originally published in 24 U.C. DAVIS L. REV. 283 (1998), [http://www.lawtechjournal.com/archives/blt/i3-hh.html#N_18_ HTML] (part B)&lt;br /&gt;
&lt;br /&gt;
[9] 928 F. 2d 504 - Court of Appeals, &#039;&#039;US v. Morris&#039;&#039;, 2nd Circuit 1991, [http://scholar.google.com/scholar_case?case=551386241451639668 HTML] (case file)&lt;br /&gt;
&lt;br /&gt;
[10] Charles F. Horne (1915), Claude Hermann Walter Johns, The Encyclopaedia Britannica, 11th ed. 1910-, &#039;&#039;Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE&#039;&#039;, Translated by L. W. King, Paul Halsall, March 1998, [http://www.fordham.edu/halsall/ancient/hamcode.html HTML] (Internet History Sourcebook)&lt;br /&gt;
&lt;br /&gt;
[11] Scott D. Sagan, &#039;&#039;Review: History, Analogy, and Deterrence Theory&#039;&#039;, The MIT Press, 1991, [http://www.jstor.org.proxy.library.carleton.ca/stable/204567?&amp;amp;Search=yes&amp;amp;searchText=history%2Canalogy%2CAND&amp;amp;searchText=deterrence&amp;amp;list=hide&amp;amp;searchUri=%2Faction%2FdoBasicSearch%3Facc%3Don%26Query%3Dhistory%252Canalogy%252Cand%2Bdeterrence%26gw%3Djtx%26acc%3Don%26prq%3D204567%26Search%3DSearch%26hp%3D25%26wc%3Don%26acc%3Don&amp;amp;prevSearch=&amp;amp;item=1&amp;amp;ttl=5&amp;amp;returnArticleService=showFullText HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[12] Rollin M. Perkins, &#039;&#039;A Rationale of Mens Rea&#039;&#039;, Harvard Law Review, 1939, [http://www.jstor.org/pss/1334184 HTML] (book link)&lt;br /&gt;
&lt;br /&gt;
[13] Marquis Beccaria, &#039;&#039;Of Crimes and Punishments&#039;&#039;, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, [http://www.constitution.org/cb/crim_pun12.htm HTML] (essay translation)&lt;br /&gt;
&lt;br /&gt;
[14] Roger M. Needham, &#039;&#039;Denial of Service&#039;&#039;, ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8747</id>
		<title>Talk:DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8747"/>
		<updated>2011-03-20T21:06:15Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Denial of Service */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult as computers do not care what task they are given; they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to try to narrow down whether the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 08&amp;lt;/u&amp;gt;===&lt;br /&gt;
*&#039;&#039;&#039;Definition of Justice&#039;&#039;&#039; - Can we separate the computer punishment from the user punishment?&lt;br /&gt;
*&#039;&#039;&#039;Transparency&#039;&#039;&#039; - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused for participating in a malicious attack, this can be clarified&lt;br /&gt;
*&#039;&#039;&#039;Punishment&#039;&#039;&#039; - Computational puzzles for fighting unsolicited inbound traffic. &lt;br /&gt;
*&#039;&#039;&#039;Morality rating&#039;&#039;&#039; - Systems get a &amp;quot;moral rating&amp;quot; that can go up or down. Based on this rating, more or less trust can be given to that system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Capital punishment? &lt;br /&gt;
Financial sanction or imprisonment are our current ways of punishment. They&#039;re expensive (maintaining databases, keeping state, paying for prisons). &lt;br /&gt;
&lt;br /&gt;
Bodily harm - limited time to perform. The fact that they&#039;ve been punished is visible. Losing hands, losing eyes - people can see that. Information propagates because the authorities make an example of someone. &lt;br /&gt;
&lt;br /&gt;
Maybe the solution is to restrict protocols if you have a low morality rating. For example, you can restrict encryption and compression, which means anything you do will be publicly visible.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 10&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
* Offender Registration: Global list of morality registered for perusal of other networks.&lt;br /&gt;
* Encrypted logs on client-side&lt;br /&gt;
** Reporting with tangible evidence&lt;br /&gt;
* Compensation for crimes&lt;br /&gt;
* virus notification&lt;br /&gt;
* detrimental to attacker buying a new computer, rather than total prevention&lt;br /&gt;
** assumption that computer can always be identified&lt;br /&gt;
* virus as umbrella group for mobile code&lt;br /&gt;
** active attackers punished differently from passive attackers&lt;br /&gt;
* Research Topics:&lt;br /&gt;
** What is Justice?&lt;br /&gt;
*** Mike&lt;br /&gt;
** Justice in terms of computers.&lt;br /&gt;
*** Matthew&lt;br /&gt;
** Crime and Punishment.&lt;br /&gt;
*** David&lt;br /&gt;
** Justice Web&lt;br /&gt;
*** Thomas McMahon&lt;br /&gt;
&lt;br /&gt;
===Mar 17===&lt;br /&gt;
&lt;br /&gt;
==Research Documentation==&lt;br /&gt;
&lt;br /&gt;
===Virtual Punishment===&lt;br /&gt;
I am currently reading a part of this book for some details on virtual punishment and a bit of history that this guy wrote about, but not sure if there is much there yet. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=KacfpI0zYAUC&amp;amp;oi=fnd&amp;amp;pg=PA206&amp;amp;dq=punishing+computers&amp;amp;ots=YhI8lfMo1F&amp;amp;sig=c7MqOVjR-9QKjj5_ANi0yyxYiAA#v=onepage&amp;amp;q=punishing%20computers&amp;amp;f=false link] --[[User:Mchou2|Mchou2]] 03:29, 3 March 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.2723&amp;amp;rep=rep1&amp;amp;type=pdf Responsible Computers?]&lt;br /&gt;
&lt;br /&gt;
===Theory of Justice===&lt;br /&gt;
&lt;br /&gt;
Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of Justice that may serve the purpose of distributed computing. Rawls describes justice as serving two primary functions;      &lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; type approach and may be more difficult to describe in terms of being incrementally deployable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===The Birth of Prison===&lt;br /&gt;
Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book focuses on how punishment evolved from medieval methods such as being &amp;quot;drawn and quartered&amp;quot; to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. For medieval, or &amp;quot;Monarchical Punishment&amp;quot;, the population is discouraged from doing bad acts by the public, and brutal, way that punishment is exacted. The punishments included torture and executions. On the other hand, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, where there are people deemed as experts who have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this provides a couple of ways that justice could be enforced. Think of the general distributed system as a free zone in which computers can act how they wish, but with laws in place to describe &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of doing something against the described laws, then the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system) or potentially placed under the care of a supervisor computer who will allow the &amp;quot;bad&amp;quot; computer to continue to participate in certain, restricted actions until the professional (supervisor) computer approves of releasing the &amp;quot;bad&amp;quot; computer back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is that of Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can also be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change; you will carry a social stigma. If this is adopted by computers, through some reputation mechanism, then maybe distributed computing relationships could be formed and altered based on the actions conducted by individual computers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ecce Homo &amp;amp; The Anarchist===&lt;br /&gt;
Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split based on good vs. bad; for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split on good vs. evil; for example, good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), then you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the above Foucault section, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer can only be released when its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
&lt;br /&gt;
=== Crime and Punishment ===&lt;br /&gt;
This is just a little placeholder for some thoughts before I post them to the main page. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Limiting capabilities&#039;&#039;&#039;&lt;br /&gt;
Anil mentioned in class the possibility of revoking or limiting capabilities if a user/computer has been found to be guilty of a crime. For example, the computer could somehow lose its ability to perform encryption or secure communications. Somewhat related is the idea of cpu-throttling by performing additional work (explained in the section below). &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Proof-of-Work&#039;&#039;&#039;&lt;br /&gt;
There has been a lot of research done in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (much less than .01c per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that when hashed gives a result smaller or larger than a specific value. You can statistically predict how long such a computation would take, and you could tweak it to be some particular value (10s, 1m, etc). &lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
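The hashcash-style puzzle described above can be sketched in a few lines of Python; the leading-zero-bits target and the difficulty-per-trust schedule below are illustrative assumptions, not a specification:&lt;br /&gt;

```python
# Hypothetical sketch of a hashcash-style puzzle: find a nonce so that
# SHA-256(message + nonce) falls below a target, i.e. has at least `bits`
# leading zero bits. The difficulty schedule is an assumption.
import hashlib

def solve(message, bits):
    target = 1 << (256 - bits)  # the hash value must fall below this
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check(message, nonce, bits):
    """Recipient's side: a single hash comparison."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

# Illustrative schedule: misbehaving or untrusted senders are assigned more
# bits; the sender's expected work doubles with each extra bit.
bits_for = {"trusted": 8, "untrusted": 16, "punished": 22}

nonce = solve("mail-to:alice", bits_for["trusted"])
assert check("mail-to:alice", nonce, bits_for["trusted"])
```

The asymmetry is what makes this usable as punishment: the recipient verifies with one hash, while the punished sender's expected work grows exponentially with the assigned bit count.&lt;br /&gt;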
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Unique identifiers&#039;&#039;&#039; &lt;br /&gt;
How are machines identified? Although this problem is related to attribution and there&#039;s another team working on it, we can make some basic assumptions that each machine is identifiable. This identifier should be able to survive a reformat, but buying a new machine would get you a new identifier. We might argue that this is fine, because all we&#039;re trying to do is raise the price an attacker has to pay to commit a crime (i.e., buy more machines).&lt;br /&gt;
&lt;br /&gt;
===Gathering Evidence===&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes using the statistics for blocking bad traffic, the same logic can be applied to gathering evidence against the attackers of a DDoS. It gets pretty heavy into the statistical analysis, so it&#039;d probably be better to read the paper than for me to attempt to explain it. &lt;br /&gt;
Basically, it&#039;s meant to detect a DDoS that is purposefully disguised as a legitimate traffic flood. This means that justice can be properly served to malicious computers, as opposed to punishing too many computers that merely want your resources.&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses the idea of computer forensics and what is needed in order to gather and manage evidence. Although it is meant to be applied at a human level of judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to encrypt the data in a way that preserves the original form. It also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software being run on select computers to process and preserve any evidence gathered.&lt;br /&gt;
&lt;br /&gt;
===CFAA Computer Fraud and Abuse Act===&lt;br /&gt;
&lt;br /&gt;
In terms of justice, there is an act that specifies which cyber crimes exist and the caliber into which they should be categorized. One fundamental idea that is followed is mens rea, defined as the &amp;quot;mental state&amp;quot; of a crime. &amp;quot;The Model Penal Code (&amp;quot;MPC&amp;quot;) lists four levels of mens rea -- purposely, knowingly, recklessly, and negligently. The MPC categories range from the highest level, purposely, to the lowest level, negligently. These mens rea levels are further divided into high and low mens rea requirements. The high mens rea levels include acts criminals do intentionally and knowingly. The low mens rea levels include acts criminals do recklessly, negligently, and with strict liability. Criminals have a higher level of mens rea when their intent is more specific; therefore, they are more blameworthy. With these differing mens rea categories in mind, Congress drafted CFAA to address computer crimes occurring on the Internet.&amp;quot;[http://www.lawtechjournal.com/archives/blt/i3-hh.html]&lt;br /&gt;
&lt;br /&gt;
Reading into the decisions they have made to update the CFAA brings up the distinction between users who &amp;quot;intentionally&amp;quot; do harm, users unknowingly participating, and even users attempting to help the system by hurting it and then fixing it ([http://scholar.google.com/scholar_case?case=551386241451639668 Morris]).&lt;br /&gt;
&lt;br /&gt;
Another note is that there are laws and rules being made for humans to be penalized for such negative cyber actions, but even before penalty, it is important to set up a sufficiently secure system that will try to mitigate the negative actions that can take place, just as workers in a business must be educated on detecting malicious software and other vulnerabilities in order to further secure the system. Setting up stand-alone protection on each system would prevent the need to punish certain acts, since they would be impossible to commit. [http://www.witsa.org/papers/McConnell-cybercrime.pdf Law is only part of the answer]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concept: Justice Web==&lt;br /&gt;
&lt;br /&gt;
The Justice Web is a possible implementation that uses the research we have done so far. It is essentially an incrementally-deployable network system that shares resources with users within the same Justice Web, based on a morality rating. Evidence is logged so that those within the Web may be held accountable, and those without may be recorded and watched for future misbehaviour.&lt;br /&gt;
&lt;br /&gt;
===What it is===&lt;br /&gt;
&lt;br /&gt;
The Justice Web is an implementation that treats a network as a distributed system. Resources are shared among the users, based on some measure of trust. As the web grows, more computers become linked to each other within the web, making it harder to manage trust given to each member of the Web. Also, the Web should provide some shared protection for those within the network against external attacks.&lt;br /&gt;
&lt;br /&gt;
Because of this, some sort of Justice System is needed to process evidence and sentence malicious computers. The Justice Web would need a computer or computers to act as the judge. After the judgement, the Justice Web would then need to enforce a penalty on the offender.&lt;br /&gt;
&lt;br /&gt;
===What it does===&lt;br /&gt;
&lt;br /&gt;
The Justice Web links multiple computers together to act as a distributed system. The amount of resources allotted to a member is dependent on their moral rating and trust. The most trusted computer would possibly be the leader of the Web, acting as the judge. To gather evidence, a log is kept at each node. This evidence is encrypted so that the user cannot tamper with it. &lt;br /&gt;
&lt;br /&gt;
The evidence is collected by the Justice Web and handled by members with a high level of trust. That is to say, the highly trusted computers within the system would essentially be able to define how much evidence is needed, as well as what punishment is to be handed out. The power of these high members is not absolute, but they are capable of influencing a standard set of rules. Rulings follow common law: a punishment is handled in the same way as previous ones, unless explicitly changed by the high members.&lt;br /&gt;
&lt;br /&gt;
After the evidence is processed, and a ruling is made by the high members, the Justice Web must then enforce the punishment. For threats coming from outside the Web, each member of the Web is warned about the offender. Continued communication with the offender will be allowed, but if an infection does occur, the punishment for becoming infected would be more severe.&lt;br /&gt;
&lt;br /&gt;
As for offenders within the system, the morality rating attached to that member is lowered and the amount of trust is decreased. From a practical standpoint, the punishment would involve restricting the resources accessible to the member while increasing the member&#039;s workload. The amount of trust increases over time, allowing the member to slowly regain access to resources, but the morality rating is kept so that others remain aware that the member has done wrong in the past.&lt;br /&gt;
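A rough sketch of this decay-and-recovery mechanic (the class, the 0.25 trust penalty, and the 5% recovery rate are all invented for illustration; nothing here is fixed by the design above):

```python
# Illustrative sketch: trust drops sharply on conviction and recovers
# gradually, while the morality record of past offences never decays.
class Member:
    def __init__(self):
        self.trust = 1.0      # fraction of full resource access granted
        self.offences = []    # permanent morality history, visible to others

    def punish(self, offence):
        self.offences.append(offence)  # the record is kept forever
        self.trust *= 0.25             # immediate restriction of resources

    def tick(self):
        # Each period, recover 5% of the gap toward full trust; trust
        # approaches 1.0 again, but the offence list still shows the past.
        self.trust += 0.05 * (1.0 - self.trust)
```

The key property is that `trust` converges back toward full access while `offences` remains as the permanent morality history the Web can consult.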
&lt;br /&gt;
&lt;br /&gt;
[http://www.computer.org/portal/web/csdl/doi/10.1109/ICDCS.2006.78 Prevent DOS by preventing spoofing]&lt;br /&gt;
&lt;br /&gt;
Ingress filtering is another good method to prevent users on a network from spoofing IPs for DoS attacks. [http://delivery.acm.org.proxy.library.carleton.ca/10.1145/350000/347560/p295-savage.pdf?key1=347560&amp;amp;key2=5289230031&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;ip=134.117.10.200&amp;amp;CFID=14033754&amp;amp;CFTOKEN=82498533 link]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation Section Notes==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;All computers have a unique ID&#039;&#039;&#039; - &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There exists some form of morality reputation that is public knowledge.&#039;&#039;&#039; - It should be clear to everyone how you can lose morality. There should also be a clear process to follow in case of disputes. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website administrators will limit site usage based on the morality/reputation rating.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Morality history.&#039;&#039;&#039; - The current morality value is useful for quick validation. More complicated scenarios (e.g., buying something from ebay) might merit a more detailed explanation of why a computer has a particular morality value, so it should be possible to see the history. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website administrators will determine what restrictions are imposed on a website based on morality ratings. Different sites may have different penalties based on their own rules.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Comment Spam===&lt;br /&gt;
&amp;lt;u&amp;gt;Evidence&amp;lt;/u&amp;gt;&lt;br /&gt;
*Need to detect the bots and the origin of the spam.&lt;br /&gt;
*should include:&lt;br /&gt;
**the comment itself.&lt;br /&gt;
**a link to the page the comment exists on.&lt;br /&gt;
**the unique ID of the perpetrator.&lt;br /&gt;
**justification of why it is spam.&lt;br /&gt;
&amp;lt;u&amp;gt;investigating computer&amp;lt;/u&amp;gt;&lt;br /&gt;
*has access to the transaction/communication data of the perpetrating computer.&lt;br /&gt;
*compares the reported message to other communications to detect if spam has occurred.&lt;br /&gt;
&amp;lt;u&amp;gt;Currently deployed solutions&amp;lt;/u&amp;gt;&lt;br /&gt;
*CAPTCHAS - try to detect if the comment submission came from a human or a bot. &lt;br /&gt;
*Filtering - scan for and block specific keywords (pharmaceutical terms, porn terms, etc)&lt;br /&gt;
*Rate limiting - only allow N comments in X time from the same source.&lt;br /&gt;
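The rate-limiting entry above can be sketched as a sliding-window limiter keyed on the unique ID; the class name, parameters, and the idea of passing the clock in explicitly are illustrative, not taken from any deployed system:

```python
from collections import defaultdict, deque

# Illustrative sliding-window limiter: allow at most `n` comments per
# `window` seconds from the same unique source ID.
class RateLimiter:
    def __init__(self, n, window):
        self.n = n
        self.window = window
        self.history = defaultdict(deque)  # source ID -> recent timestamps

    def allow(self, source_id, now):
        q = self.history[source_id]
        while q and now - q[0] > self.window:
            q.popleft()  # forget submissions older than the window
        if len(q) < self.n:
            q.append(now)
            return True  # under the limit: accept the comment
        return False     # over the limit: reject (or demand a CAPTCHA)
```

Rejections from such a limiter could double as spam evidence, since each one already records the unique ID and the timestamps involved.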
&lt;br /&gt;
===Denial of Service===&lt;br /&gt;
Use the unique ID to trace back all traffic from the DoS attack to originating machines.&lt;br /&gt;
*if a certain percent of the traffic originates from a single ID (say 60%), then a DoS has occurred.&lt;br /&gt;
*only the computer conducting the DoS is penalized.&lt;br /&gt;
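A minimal sketch of the threshold heuristic above, assuming traffic can be reduced to a list of per-packet unique IDs (the function name and the 60% default are illustrative):

```python
from collections import Counter

# Illustrative check for the threshold heuristic: flag any unique ID that
# accounts for more than `threshold` of the observed traffic.
def detect_dos(packet_ids, threshold=0.6):
    if not packet_ids:
        return []
    counts = Counter(packet_ids)
    total = len(packet_ids)
    # Only the IDs conducting the flood are returned for penalization;
    # everyone else's traffic is treated as legitimate demand.
    return [uid for uid, c in counts.items() if c / total > threshold]
```

Note that a strict per-ID threshold like this only catches single-source floods; a distributed attack spread across many IDs would stay under it, which is where the statistical approaches cited below come in.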
&amp;lt;u&amp;gt;Current solutions&amp;lt;/u&amp;gt;&lt;br /&gt;
*Blacklists - Don&#039;t allow incoming connections from specific IP addresses&lt;br /&gt;
*Routing/configuration - Ingress filtering, ACL, firewalls, syn-cookies&lt;br /&gt;
*Dynamic over-provisioning - When you detect a surge in traffic (legitimate or attack), increase the amount of bandwidth available. &lt;br /&gt;
*Null routing - Refuse to route traffic being sent to the victim for a period of time (upstream)&lt;br /&gt;
&lt;br /&gt;
===Phishing===&lt;br /&gt;
Send the original website link as well as the phishing site link in the report so that the investigating computer can compare.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8741</id>
		<title>Talk:DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8741"/>
		<updated>2011-03-20T21:00:08Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Implementation Section Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult, as computers do not care what task they are given; they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need, as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to try and narrow down if the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 08&amp;lt;/u&amp;gt;===&lt;br /&gt;
*&#039;&#039;&#039;Definition of Justice&#039;&#039;&#039; - Can we separate the computer punishment from the user punishment?&lt;br /&gt;
*&#039;&#039;&#039;Transparency&#039;&#039;&#039; - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused of participating in a malicious attack, this can be clarified.&lt;br /&gt;
*&#039;&#039;&#039;Punishment&#039;&#039;&#039; - Computational puzzles for fighting unsolicited inbound traffic. &lt;br /&gt;
*&#039;&#039;&#039;Morality rating&#039;&#039;&#039; - Systems get a &amp;quot;moral rating&amp;quot; that can go up or down. Based on this rating, more or less trust can be given to that system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Capital punishment?&lt;br /&gt;
Financial sanctions and imprisonment are our current ways of punishing. They&#039;re expensive (maintaining databases, keeping state, paying for prisons).&lt;br /&gt;
&lt;br /&gt;
Bodily harm - quick to carry out, and the fact that someone has been punished is visible. Losing hands, losing eyes: people can see that. Information propagates because the authorities make an example of someone.&lt;br /&gt;
&lt;br /&gt;
Maybe the solution is to restrict protocols if you have a low morality rating. E.g., restricting encryption and compression means anything you do will be publicly visible.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 10&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
* Offender Registration: Global list of morality ratings, registered for perusal by other networks.&lt;br /&gt;
* Encrypted logs on client-side&lt;br /&gt;
** Reporting with tangible evidence&lt;br /&gt;
* Compensation for crimes&lt;br /&gt;
* virus notification&lt;br /&gt;
* detrimental to attacker buying a new computer, rather than total prevention&lt;br /&gt;
** assumption that computer can always be identified&lt;br /&gt;
* virus as umbrella group for mobile code&lt;br /&gt;
** active attackers punished differently from passive attackers&lt;br /&gt;
* Research Topics:&lt;br /&gt;
** What is Justice?&lt;br /&gt;
*** Mike&lt;br /&gt;
** Justice in terms of computers.&lt;br /&gt;
*** Matthew&lt;br /&gt;
** Crime and Punishment.&lt;br /&gt;
*** David&lt;br /&gt;
** Justice Web&lt;br /&gt;
*** Thomas McMahon&lt;br /&gt;
&lt;br /&gt;
===Mar 17===&lt;br /&gt;
&lt;br /&gt;
==Research Documentation==&lt;br /&gt;
&lt;br /&gt;
===Virtual Punishment===&lt;br /&gt;
I am currently reading a part of this book for some details on virtual punishment and a bit of history that this guy wrote about, but not sure if there is much there yet. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=KacfpI0zYAUC&amp;amp;oi=fnd&amp;amp;pg=PA206&amp;amp;dq=punishing+computers&amp;amp;ots=YhI8lfMo1F&amp;amp;sig=c7MqOVjR-9QKjj5_ANi0yyxYiAA#v=onepage&amp;amp;q=punishing%20computers&amp;amp;f=false link] --[[User:Mchou2|Mchou2]] 03:29, 3 March 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.2723&amp;amp;rep=rep1&amp;amp;type=pdf Responsible Computers?]&lt;br /&gt;
&lt;br /&gt;
===Theory of Justice===&lt;br /&gt;
&lt;br /&gt;
Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of Justice that may serve the purposes of distributed computing. Rawls describes justice as serving two primary functions:&lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; type approach and may be more difficult to describe in terms of being incrementally deployable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===The Birth of Prison===&lt;br /&gt;
Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book focuses on how punishment evolved from medieval methods (&amp;quot;drawing and quartering&amp;quot;) to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under the medieval, or &amp;quot;Monarchical Punishment&amp;quot;, model, the population is discouraged from doing bad acts by the public, and brutal, way that punishment is exacted; the punishments included torture and executions. On the other hand, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, where people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this provides a couple of ways that justice could be enforced. Think of the general distributed system as a free zone in which computers can act as they wish, but with laws in place describing &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of doing something against the described laws, then the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or potentially placed under the care of a supervisor computer, which allows the &amp;quot;bad&amp;quot; computer to continue to participate in certain restricted actions until the professional (supervisor) computer approves of releasing the &amp;quot;bad&amp;quot; computer back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change: you will carry a social stigma. If this is adopted by computers, through some reputation mechanism, then maybe distributed computing relationships could be formed and altered based on the actions conducted by individual computers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ecce Homo &amp;amp; The Anarchist===&lt;br /&gt;
Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split based on good vs. bad, for example, good would be things like wealth, strength, health, and power, while bad is associated terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split on good vs. evil, for example, good would be terms like charity, piety, restraint, meekness, submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly or aggressively (say DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the Foucault section above, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer can only be released when its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
&lt;br /&gt;
=== Crime and Punishment ===&lt;br /&gt;
This is just a little placeholder for some thoughts before I post them to the main page. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Limiting capabilities&#039;&#039;&#039;&lt;br /&gt;
Anil mentioned in class the possibility of revoking or limiting capabilities if a user/computer has been found to be guilty of a crime. For example, the computer could somehow lose its ability to perform encryption or secure communications. Somewhat related is the idea of cpu-throttling by performing additional work (explained in the section below). &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Proof-of-Work&#039;&#039;&#039;&lt;br /&gt;
There has been a lot of research done in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (much less than 0.01 cents per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string whose hash is smaller than a specific target value. You can statistically predict how long such a computation will take, and you can tune the target so the expected time is some particular value (10s, 1m, etc.).&lt;br /&gt;
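A hedged sketch of such a puzzle in the hashcash style: the solver searches for a nonce whose SHA-256 digest begins with some number of zero hex digits, and each additional digit multiplies the expected work by 16 (the function names and difficulty scale are illustrative, not taken from the cited drafts):

```python
import hashlib
import itertools

def solve_puzzle(message, difficulty):
    # Brute-force a nonce until the digest of (message + nonce) starts
    # with `difficulty` zero hex digits; expected work grows 16x per digit.
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{message}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_puzzle(message, nonce, difficulty):
    # Verification costs a single hash, so the recipient checks cheaply
    # while the sender pays the (tunable) search cost.
    digest = hashlib.sha256(f"{message}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Under this sketch, punishing a misbehaving system is just a matter of assigning it a higher `difficulty` before its messages are accepted.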
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Unique identifiers&#039;&#039;&#039; &lt;br /&gt;
How are machines identified? Although this problem is related to attribution and there&#039;s another team working on it, we can make some basic assumptions that each machine is identifiable. This identifier should be able to survive a reformat, but buying a new machine would get you a new identifier. We might argue that this is fine, because all we&#039;re trying to do is raise the price an attacker has to pay to commit a crime (i.e., buy more machines).&lt;br /&gt;
&lt;br /&gt;
===Gathering Evidence===&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes using the statistics to block bad traffic, the same logic can be applied to gathering evidence against the attackers in a DDoS. It gets fairly deep into the statistical analysis, so it is probably better to read the paper than to attempt a summary here.&lt;br /&gt;
Basically, it is meant to detect a DDoS that is purposefully disguised as a legitimate traffic flood. This means that justice can be properly served to malicious computers, as opposed to merely too many computers wanting your resources.&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses the idea of computer forensics and what is needed in order to gather and manage evidence. Although it is meant to be applied at the human level of judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to encrypt the data while preserving its original form. The paper also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software running on select computers to process and preserve any evidence gathered.&lt;br /&gt;
&lt;br /&gt;
===CFAA Computer Fraud and Abuse Act===&lt;br /&gt;
&lt;br /&gt;
In terms of justice, there is an act that specifies which cyber crimes exist and how they should be categorized by severity. One fundamental idea it follows is mens rea, defined as the &amp;quot;mental state&amp;quot; of a crime. &amp;quot;The Model Penal Code (&amp;quot;MPC&amp;quot;) lists four levels of mens rea -- purposely, knowingly, recklessly, and negligently. The MPC categories range from the highest level, purposely, to the lowest level, negligently. These mens rea levels are further divided into high and low mens rea requirements. The high mens rea levels include acts criminals do intentionally and knowingly. The low mens rea levels include acts criminals do recklessly, negligently, and with strict liability. Criminals have a higher level of mens rea when their intent is more specific; therefore, they are more blameworthy. With these differing mens rea categories in mind, Congress drafted CFAA to address computer crimes occurring on the Internet.&amp;quot;[http://www.lawtechjournal.com/archives/blt/i3-hh.html]&lt;br /&gt;
&lt;br /&gt;
Reading into the decisions made when updating the CFAA brings up how it treats users who &amp;quot;intentionally&amp;quot; do harm, users who participate unknowingly, and even users attempting to help the system by hurting it and then fixing it ([http://scholar.google.com/scholar_case?case=551386241451639668 Morris]).&lt;br /&gt;
&lt;br /&gt;
Another note is that laws and rules are being made so that humans can be penalized for such negative cyber actions, but even before penalties, it is important to set up a sufficiently secure system that mitigates such actions in the first place, just as workers in a business must be educated on detecting malicious software and other vulnerabilities in order to further secure the system. Setting up standalone protection on each system would remove the need to punish certain acts, since they would be impossible to commit. [http://www.witsa.org/papers/McConnell-cybercrime.pdf Law is only part of the answer]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concept: Justice Web==&lt;br /&gt;
&lt;br /&gt;
The Justice Web is a possible implementation that uses the research we have done so far. It is essentially an incrementally-deployable network system that shares resources with users within the same Justice Web, based on a morality rating. Evidence is logged so that those within the Web may be held accountable, and those without may be recorded and watched for future misbehaviour.&lt;br /&gt;
&lt;br /&gt;
===What it is===&lt;br /&gt;
&lt;br /&gt;
The Justice Web is an implementation that treats a network as a distributed system. Resources are shared among the users, based on some measure of trust. As the web grows, more computers become linked to each other within the web, making it harder to manage trust given to each member of the Web. Also, the Web should provide some shared protection for those within the network against external attacks.&lt;br /&gt;
&lt;br /&gt;
Because of this, some sort of Justice System is needed to process evidence and sentence malicious computers. The Justice Web would need a computer or computers to act as the judge. After the judgement, the Justice Web would then need to enforce a penalty on the offender.&lt;br /&gt;
&lt;br /&gt;
===What it does===&lt;br /&gt;
&lt;br /&gt;
The Justice Web links multiple computers together to act as a distributed system. The amount of resources allotted to a member depends on their moral rating and trust. The most trusted computer would possibly be the leader of the Web, acting as the judge. To gather evidence, a log is kept at each node. This evidence is encrypted so that the user cannot tamper with it.&lt;br /&gt;
&lt;br /&gt;
The evidence is collected by the Justice Web and handled by members with a high level of trust. That is to say, the highly trusted computers within the system would essentially be able to define how much evidence is needed, as well as what punishment is to be handed out. The power of these high members is not absolute, but they are capable of influencing a standard set of rules. Rulings follow common law: a punishment is handled in the same way as previous ones, unless explicitly changed by the high members.&lt;br /&gt;
&lt;br /&gt;
After the evidence is processed, and a ruling is made by the high members, the Justice Web must then enforce the punishment. For threats coming from outside the Web, each member of the Web is warned about the offender. Continued communication with the offender will be allowed, but if an infection does occur, the punishment for becoming infected would be more severe.&lt;br /&gt;
&lt;br /&gt;
As for offenders within the system, the morality rating attached to that member is lowered and the amount of trust is decreased. From a practical standpoint, the punishment would involve restricting the resources accessible to the member while increasing the member&#039;s workload. The amount of trust increases over time, allowing the member to slowly regain access to resources, but the morality rating is kept so that others remain aware that the member has done wrong in the past.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.computer.org/portal/web/csdl/doi/10.1109/ICDCS.2006.78 Prevent DOS by preventing spoofing]&lt;br /&gt;
&lt;br /&gt;
Ingress filtering is another good method to prevent users on a network from spoofing IPs for DoS attacks. [http://delivery.acm.org.proxy.library.carleton.ca/10.1145/350000/347560/p295-savage.pdf?key1=347560&amp;amp;key2=5289230031&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;ip=134.117.10.200&amp;amp;CFID=14033754&amp;amp;CFTOKEN=82498533 link]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation Section Notes==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;All computers have a unique ID&#039;&#039;&#039; - &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There exists some form of morality reputation that is public knowledge.&#039;&#039;&#039; - It should be clear to everyone how you can lose morality. There should also be a clear process to follow in case of disputes. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website administrators will limit site usage based on the morality/reputation rating.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Morality history.&#039;&#039;&#039; - The current morality value is useful for quick validation. More complicated scenarios (e.g., buying something from ebay) might merit a more detailed explanation of why a computer has a particular morality value, so it should be possible to see the history. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website administrators will determine what restrictions are imposed on a website based on morality ratings. Different sites may have different penalties based on their own rules.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Comment Spam===&lt;br /&gt;
&amp;lt;u&amp;gt;Evidence&amp;lt;/u&amp;gt;&lt;br /&gt;
*Need to detect the bots and the origin of the spam.&lt;br /&gt;
*should include:&lt;br /&gt;
**the comment itself.&lt;br /&gt;
**a link to the page the comment exists on.&lt;br /&gt;
**the unique ID of the perpetrator.&lt;br /&gt;
**justification of why it is spam.&lt;br /&gt;
&amp;lt;u&amp;gt;investigating computer&amp;lt;/u&amp;gt;&lt;br /&gt;
*has access to the transaction/communication data of the perpetrating computer.&lt;br /&gt;
*compares the reported message to other communications to detect if spam has occurred.&lt;br /&gt;
&amp;lt;u&amp;gt;Currently deployed solutions&amp;lt;/u&amp;gt;&lt;br /&gt;
*CAPTCHAS - try to detect if the comment submission came from a human or a bot. &lt;br /&gt;
*Filtering - scan for and block specific keywords (pharmaceutical terms, porn terms, etc)&lt;br /&gt;
*Rate limiting - only allow N comments in X time from the same source.&lt;br /&gt;
&lt;br /&gt;
===Denial of Service===&lt;br /&gt;
Use the unique ID to trace back all traffic from the DoS attack to originating machines.&lt;br /&gt;
*if a certain percent of the traffic originates from a single ID (say 60%), then a DoS has occurred.&lt;br /&gt;
*only the computer conducting the DoS is penalized.&lt;br /&gt;
&lt;br /&gt;
===Phishing===&lt;br /&gt;
Send the original website link as well as the phishing site link in the report so that the investigating computer can compare.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8740</id>
		<title>Talk:DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8740"/>
		<updated>2011-03-20T20:59:11Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Implementation Section Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult as computers do not care what task they are given; they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to try and narrow down if the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 08&amp;lt;/u&amp;gt;===&lt;br /&gt;
*&#039;&#039;&#039;Definition of Justice&#039;&#039;&#039; - Can we separate the computer punishment from the user punishment?&lt;br /&gt;
*&#039;&#039;&#039;Transparency&#039;&#039;&#039; - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused for participating in a malicious attack, this can be clarified&lt;br /&gt;
*&#039;&#039;&#039;Punishment&#039;&#039;&#039; - Computational puzzles for fighting unsolicited inbound traffic. &lt;br /&gt;
*&#039;&#039;&#039;Morality rating&#039;&#039;&#039; - Systems get a &amp;quot;moral rating&amp;quot; that can go up or down. Based on this rating, more or less trust can be given to that system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Capital punishment? &lt;br /&gt;
Financial sanctions and imprisonment are our current ways of punishment. They&#039;re expensive (maintaining databases, keeping state, paying for prisons). &lt;br /&gt;
&lt;br /&gt;
Bodily harm - limited time to perform. The fact that they&#039;ve been punished is visible: losing hands, losing eyes, people can see that. Information propagates because the authorities make an example of someone. &lt;br /&gt;
&lt;br /&gt;
Maybe the solution is to restrict protocols if you have a low morality rating. For example, you could restrict encryption and compression, which means anything you do will be publicly visible.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 10&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
* Offender Registration: Global list of morality registered for perusal of other networks.&lt;br /&gt;
* Encrypted logs on client-side&lt;br /&gt;
** Reporting with tangible evidence&lt;br /&gt;
* Compensation for crimes&lt;br /&gt;
* virus notification&lt;br /&gt;
* detrimental to attacker buying a new computer, rather than total prevention&lt;br /&gt;
** assumption that computer can always be identified&lt;br /&gt;
* virus as umbrella group for mobile code&lt;br /&gt;
** active attackers punished differently from passive attackers&lt;br /&gt;
* Research Topics:&lt;br /&gt;
** What is Justice?&lt;br /&gt;
*** Mike&lt;br /&gt;
** Justice in terms of computers.&lt;br /&gt;
*** Matthew&lt;br /&gt;
** Crime and Punishment.&lt;br /&gt;
*** David&lt;br /&gt;
** Justice Web&lt;br /&gt;
*** Thomas McMahon&lt;br /&gt;
&lt;br /&gt;
===Mar 17===&lt;br /&gt;
&lt;br /&gt;
==Research Documentation==&lt;br /&gt;
&lt;br /&gt;
===Virtual Punishment===&lt;br /&gt;
I am currently reading part of this book for some details on virtual punishment and a bit of the history the author covers, but I am not sure if there is much there yet. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=KacfpI0zYAUC&amp;amp;oi=fnd&amp;amp;pg=PA206&amp;amp;dq=punishing+computers&amp;amp;ots=YhI8lfMo1F&amp;amp;sig=c7MqOVjR-9QKjj5_ANi0yyxYiAA#v=onepage&amp;amp;q=punishing%20computers&amp;amp;f=false link] --[[User:Mchou2|Mchou2]] 03:29, 3 March 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.2723&amp;amp;rep=rep1&amp;amp;type=pdf Responsible Computers?]&lt;br /&gt;
&lt;br /&gt;
===Theory of Justice===&lt;br /&gt;
&lt;br /&gt;
Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of Justice that may serve the purpose of distributed computing. Rawls describes justice as serving two primary functions;      &lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; type approach and may be more difficult to describe in terms of being incrementally deployable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===The Birth of Prison===&lt;br /&gt;
Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book focuses on how punishment evolved from medieval methods such as drawing and quartering to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under medieval, or &amp;quot;Monarchical Punishment&amp;quot;, the population is discouraged from doing bad acts by the public, and brutal, way that punishment is exacted; the punishments included torture and executions. On the other hand, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, where people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this provides a couple of ways that justice could be enforced. We can think of the general distributed system as a free zone in which computers can act how they wish, but with laws in place describing &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of doing something against the described laws, then the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or potentially placed under the care of a supervisor computer that allows the &amp;quot;bad&amp;quot; computer to continue to participate in certain restricted actions until the professional (supervisor) computer approves of releasing the &amp;quot;bad&amp;quot; computer back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can also be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change; you will carry a social stigma. If this is adopted by computers, through some reputation mechanism, then maybe distributed computing relationships could be formed and altered based on the actions conducted by individual computers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ecce Homo &amp;amp; The Anarchist===&lt;br /&gt;
Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split based on good vs. bad, for example, good would be things like wealth, strength, health, and power, while bad is associated terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split on good vs. evil, for example, good would be terms like charity, piety, restraint, meekness, submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not) then you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the above Foucault section, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer could only be released when its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
&lt;br /&gt;
=== Crime and Punishment ===&lt;br /&gt;
This is just a little placeholder for some thoughts before I post them to the main page. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Limiting capabilities&#039;&#039;&#039;&lt;br /&gt;
Anil mentioned in class the possibility of revoking or limiting capabilities if a user/computer has been found to be guilty of a crime. For example, the computer could somehow lose its ability to perform encryption or secure communications. Somewhat related is the idea of cpu-throttling by performing additional work (explained in the section below). &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Proof-of-Work&#039;&#039;&#039;&lt;br /&gt;
There has been a lot of research done in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (much less than .01c per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result smaller than a specific target value. You can statistically predict how long such a computation would take, and you could tune the difficulty so the expected time is some particular value (10s, 1m, etc.). &lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
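A minimal hashcash-style sketch of the puzzle described above: find a nonce whose hash of the message falls below a target. The message format, the use of SHA-256, and the &lt;code&gt;difficulty_bits&lt;/code&gt; parameter are assumptions for illustration, not a production scheme.&lt;br /&gt;

```python
import hashlib
import itertools

def solve_puzzle(message: str, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 digest of message:nonce
    falls below the target, i.e. has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_puzzle(message: str, nonce: int, difficulty_bits: int) -> bool:
    """Verification is a single hash, so the recipient pays almost nothing."""
    target = 1 << (256 - difficulty_bits)
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < target

# Solving takes about 2^12 hash attempts on average here; each extra
# difficulty bit doubles the expected work, while verification stays constant.
nonce = solve_puzzle("mail-from:alice@example.org", 12)
```

Because solving cost grows exponentially with the difficulty while verification stays a single hash, a self-governing entity can hand misbehaving systems a longer &amp;quot;sentence&amp;quot; (e.g., 10m, 1hr) simply by raising their required difficulty.&lt;br /&gt;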
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Unique identifiers&#039;&#039;&#039; &lt;br /&gt;
How are machines identified? Although this problem is related to attribution and there&#039;s another team working on it, we can make some basic assumptions that each machine is identifiable. This identifier should be able to survive a reformat, but buying a new machine would get you a new identifier. We might argue that this is fine, because all we&#039;re trying to do is raise the price an attacker has to pay to commit a crime (i.e., buy more machines).&lt;br /&gt;
&lt;br /&gt;
===Gathering Evidence===&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes the statistics be used for blocking bad traffic, the same logic can be applied to gathering evidence against the attackers of a DDoS. It gets pretty heavy into the statistical analysis, so it&#039;d probably be better to read the paper than to have me attempt to explain it. &lt;br /&gt;
Basically, it&#039;s meant to detect a DDoS that is purposefully disguised as a traffic flood. This means that justice can be properly served to malicious computers, as opposed to punishing a crowd of legitimate computers that simply want your resources.&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses the idea of computer forensics, and what is needed in order to gather and manage evidence. Although it is meant to be applied to a human level of judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided that there be a way to encrypt the data in a way that preserves the original form. It also discusses the challenge of presenting computer related evidence to non-technical jurors, but this is not a concern for computer level management. All that is required for computer forensics to work is additional software being run on select computers to process and preserve any evidence gathered.&lt;br /&gt;
&lt;br /&gt;
===CFAA Computer Fraud and Abuse Act===&lt;br /&gt;
&lt;br /&gt;
In terms of justice, there is an act that specifies which cyber crimes exist and how they should be categorized by severity. One fundamental idea that is followed is mens rea, defined as the &amp;quot;mental state&amp;quot; of a crime. &amp;quot;The Model Penal Code (&amp;quot;MPC&amp;quot;) lists four levels of mens rea -- purposely, knowingly, recklessly, and negligently. The MPC categories range from the highest level, purposely, to the lowest level, negligently. These mens rea levels are further divided into high and low mens rea requirements. The high mens rea levels include acts criminals do intentionally and knowingly. The low mens rea levels include acts criminals do recklessly, negligently, and with strict liability. Criminals have a higher level of mens rea when their intent is more specific; therefore, they are more blameworthy. With these differing mens rea categories in mind, Congress drafted CFAA to address computer crimes occurring on the Internet.&amp;quot;[http://www.lawtechjournal.com/archives/blt/i3-hh.html]&lt;br /&gt;
&lt;br /&gt;
Reading into the decisions made to update the CFAA brings up how it treats users who &amp;quot;intentionally&amp;quot; do harm, users who unknowingly participate, and even users attempting to help the system by hurting it and then fixing it ([http://scholar.google.com/scholar_case?case=551386241451639668 Morris]).&lt;br /&gt;
&lt;br /&gt;
Another note is that laws and rules are being made for humans to be penalized for such negative cyber actions, but even before penalties, it is important to set up a system secure enough to mitigate the negative actions that can take place, just as workers in a business must be educated on detecting malicious software and other vulnerabilities in order to further secure the system. Setting up stand-alone protection on each system would remove the need to punish certain acts, since they would be impossible to begin with. [http://www.witsa.org/papers/McConnell-cybercrime.pdf Law is only part of the answer]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concept: Justice Web==&lt;br /&gt;
&lt;br /&gt;
The Justice Web is a possible implementation that uses the research we have done so far. It is essentially an incrementally-deployable network system that shares resources with users within the same Justice Web, based on a morality rating. Evidence is logged so that those within the Web may be held accountable, and those without may be recorded and watched for future misbehaviour.&lt;br /&gt;
&lt;br /&gt;
===What it is===&lt;br /&gt;
&lt;br /&gt;
The Justice Web is an implementation that treats a network as a distributed system. Resources are shared among the users, based on some measure of trust. As the web grows, more computers become linked to each other within the web, making it harder to manage trust given to each member of the Web. Also, the Web should provide some shared protection for those within the network against external attacks.&lt;br /&gt;
&lt;br /&gt;
Because of this, some sort of Justice System is needed to process evidence and sentence malicious computers. The Justice Web would need a computer or computers to act as the judge. After the judgement, the Justice Web would then need to enforce a penalty on the offender.&lt;br /&gt;
&lt;br /&gt;
===What it does===&lt;br /&gt;
&lt;br /&gt;
The Justice Web links multiple computers together to act as a distributed system. The amount of resources allotted to a member is dependent on their moral rating and trust. The most trusted computer would possibly be the leader of the Web, acting as the judge. To gather evidence, a log is kept at each implementation node. This evidence is encrypted so that the user cannot tamper with it. &lt;br /&gt;
&lt;br /&gt;
The evidence is collected by the Justice Web and handled by members with a high level of trust. That is to say, the highly trusted computers within the system would essentially be able to define how much evidence is needed, as well as what punishment is to be handed out. The powers of high members are not absolute, but they are capable of influencing a standard set of rules. Rulings are handled using common law, with a punishment handled in the same way as previous ones unless explicitly changed by the high members. &lt;br /&gt;
&lt;br /&gt;
After the evidence is processed, and a ruling is made by the high members, the Justice Web must then enforce the punishment. For threats coming from outside the Web, each member of the Web is warned about the offender. Continued communication with the offender will be allowed, but if an infection does occur, the punishment for becoming infected would be more severe.&lt;br /&gt;
&lt;br /&gt;
As for offenders within the system, the morality rating attached to that member is affected, and the amount of trust is decreased. From a practical standpoint, the punishment would involve restricting the resources accessible to the member while increasing the member&#039;s workload. The amount of trust increases over time, allowing the member to slowly regain more and more access to resources, but the morality rating would be kept the same so that others remain aware that the member has done wrong in the past.&lt;br /&gt;
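The trust-recovery behaviour described above could look something like the following sketch. The &lt;code&gt;MoralityRecord&lt;/code&gt; class, the linear recovery rate, and the severity scale are all invented for illustration; the one property taken from the text is that trust regrows over time while the offence history stays on record.&lt;br /&gt;

```python
class MoralityRecord:
    """Trust recovers over time after a conviction, but the history
    of past offences is kept so peers can see wrong was done."""
    def __init__(self):
        self.trust = 1.0
        self.history = []  # permanent record: (timestamp, offence)

    def convict(self, offence: str, severity: float, when: float):
        self.history.append((when, offence))
        self.trust = max(0.0, self.trust - severity)

    def trust_at(self, now: float, recovery_rate: float = 0.01) -> float:
        """Trust regained linearly since the last conviction, capped at 1.0."""
        if not self.history:
            return self.trust
        last_conviction = self.history[-1][0]
        return min(1.0, self.trust + recovery_rate * (now - last_conviction))

r = MoralityRecord()
r.convict("comment spam", severity=0.5, when=100.0)
r.trust_at(100.0)  # 0.5 right after sentencing
r.trust_at(130.0)  # 0.8 -- trust recovers, but the history entry remains
```

Resource allocation and workload could then be scaled off &lt;code&gt;trust_at(now)&lt;/code&gt;, while &lt;code&gt;history&lt;/code&gt; feeds the public morality record.&lt;br /&gt;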
&lt;br /&gt;
&lt;br /&gt;
[http://www.computer.org/portal/web/csdl/doi/10.1109/ICDCS.2006.78 Prevent DOS by preventing spoofing]&lt;br /&gt;
&lt;br /&gt;
Ingress filtering is another good method to prevent users on a network from spoofing IPs for DoS attacks. [http://delivery.acm.org.proxy.library.carleton.ca/10.1145/350000/347560/p295-savage.pdf?key1=347560&amp;amp;key2=5289230031&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;ip=134.117.10.200&amp;amp;CFID=14033754&amp;amp;CFTOKEN=82498533 link]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation Section Notes==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;All computers have a unique ID&#039;&#039;&#039; - &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;There exists some form of morality reputation that is public knowledge.&#039;&#039;&#039; - It should be clear to everyone how you can lose morality. There should also be a clear process to follow in case of disputes. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website administrators will limit site usage based on the morality/reputation rating.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Morality history.&#039;&#039;&#039; - The current morality value is useful for quick validation. More complicated scenarios (e.g., buying something from eBay) might merit a more detailed explanation of why a computer has a particular morality value. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Website administrators will determine what restrictions are imposed on a website based on morality ratings. Different sites may have different penalties based on their own rules.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Comment Spam===&lt;br /&gt;
&amp;lt;u&amp;gt;Evidence&amp;lt;/u&amp;gt;&lt;br /&gt;
*Need to detect the bots and the origin of the spam.&lt;br /&gt;
*should include:&lt;br /&gt;
**the comment itself.&lt;br /&gt;
**a link to the page the comment exists on.&lt;br /&gt;
**the unique ID of the perpetrator.&lt;br /&gt;
**justification of why it is spam.&lt;br /&gt;
&amp;lt;u&amp;gt;investigating computer&amp;lt;/u&amp;gt;&lt;br /&gt;
*has access to the transaction/communication data of the perpetrating computer.&lt;br /&gt;
*compares the reported message to other communications to detect if spam has occurred.&lt;br /&gt;
&amp;lt;u&amp;gt;Currently deployed solutions&amp;lt;/u&amp;gt;&lt;br /&gt;
*CAPTCHAS - try to detect if the comment submission came from a human or a bot. &lt;br /&gt;
*Filtering - scan for and block specific keywords (pharmaceutical terms, porn terms, etc)&lt;br /&gt;
*Rate limiting - only allow N comments in X time from the same source.&lt;br /&gt;
&lt;br /&gt;
===Denial of Service===&lt;br /&gt;
Use the unique ID to trace back all traffic from the DoS attack to originating machines.&lt;br /&gt;
*if a certain percent of the traffic originates from a single ID (say 60%), then a DoS has occurred.&lt;br /&gt;
*only the computer conducting the DoS is penalized.&lt;br /&gt;
&lt;br /&gt;
===Phishing===&lt;br /&gt;
Send the original website link as well as the phishing site link in the report so that the investigating computer can compare.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8739</id>
		<title>Talk:DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8739"/>
		<updated>2011-03-20T20:27:59Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Comment Spam */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult as computers do not care what task they are given; they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to try and narrow down if the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 08&amp;lt;/u&amp;gt;===&lt;br /&gt;
*&#039;&#039;&#039;Definition of Justice&#039;&#039;&#039; - Can we separate the computer punishment from the user punishment?&lt;br /&gt;
*&#039;&#039;&#039;Transparency&#039;&#039;&#039; - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused for participating in a malicious attack, this can be clarified&lt;br /&gt;
*&#039;&#039;&#039;Punishment&#039;&#039;&#039; - Computational puzzles for fighting unsolicited inbound traffic. &lt;br /&gt;
*&#039;&#039;&#039;Morality rating&#039;&#039;&#039; - Systems get a &amp;quot;moral rating&amp;quot; that can go up or down. Based on this rating, more or less trust can be given to that system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Capital punishment? &lt;br /&gt;
Financial sanctions and imprisonment are our current ways of punishment. They&#039;re expensive (maintaining databases, keeping state, paying for prisons). &lt;br /&gt;
&lt;br /&gt;
Bodily harm - limited time to perform. The fact that they&#039;ve been punished is visible: losing hands, losing eyes, people can see that. Information propagates because the authorities make an example of someone. &lt;br /&gt;
&lt;br /&gt;
Maybe the solution is to restrict protocols if you have a low morality rating. For example, you could restrict encryption and compression, which means anything you do will be publicly visible.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 10&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
* Offender Registration: a global list of morality ratings, available for perusal by other networks.&lt;br /&gt;
* Encrypted logs on client-side&lt;br /&gt;
** Reporting with tangible evidence&lt;br /&gt;
* Compensation for crimes&lt;br /&gt;
* virus notification&lt;br /&gt;
* make buying a new computer costly for the attacker, rather than aiming for total prevention&lt;br /&gt;
** assumption that computer can always be identified&lt;br /&gt;
* virus as umbrella group for mobile code&lt;br /&gt;
** active attackers punished differently from passive attackers&lt;br /&gt;
* Research Topics:&lt;br /&gt;
** What is Justice?&lt;br /&gt;
*** Mike&lt;br /&gt;
** Justice in terms of computers.&lt;br /&gt;
*** Matthew&lt;br /&gt;
** Crime and Punishment.&lt;br /&gt;
*** David&lt;br /&gt;
** Justice Web&lt;br /&gt;
*** Thomas McMahon&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 17&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
==Research Documentation==&lt;br /&gt;
&lt;br /&gt;
===Virtual Punishment===&lt;br /&gt;
I am currently reading part of this book for some details on virtual punishment and a bit of the history the author covers, but I&#039;m not sure there is much there yet. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=KacfpI0zYAUC&amp;amp;oi=fnd&amp;amp;pg=PA206&amp;amp;dq=punishing+computers&amp;amp;ots=YhI8lfMo1F&amp;amp;sig=c7MqOVjR-9QKjj5_ANi0yyxYiAA#v=onepage&amp;amp;q=punishing%20computers&amp;amp;f=false link] --[[User:Mchou2|Mchou2]] 03:29, 3 March 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.2723&amp;amp;rep=rep1&amp;amp;type=pdf Responsible Computers?]&lt;br /&gt;
&lt;br /&gt;
===Theory of Justice===&lt;br /&gt;
&lt;br /&gt;
Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of justice that may suit distributed computing. Rawls describes justice as serving two primary functions:&lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; type approach and may be more difficult to describe in terms of being incrementally deployable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===The Birth of Prison===&lt;br /&gt;
Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book focuses on how punishment evolved from medieval methods such as &amp;quot;drawing and quartering&amp;quot; to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under medieval, or &amp;quot;Monarchical Punishment&amp;quot;, the population is discouraged from doing bad acts by the public, and brutal, way that punishment is exacted; the punishments included torture and executions. In contrast, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, where people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this suggests a couple of ways that justice could be enforced. Think of the general distributed system as a free zone in which computers can act how they wish, but with laws in place describing &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of violating those laws, the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or placed under the care of a supervisor computer that allows the &amp;quot;bad&amp;quot; computer to continue participating in certain restricted actions until the professional (supervisor) computer approves releasing it back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change; you will carry a social stigma. If this is adopted by computers, through some reputation mechanism, then distributed computing relationships could be formed and altered based on the actions of individual computers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ecce Homo &amp;amp; The Anarchist===&lt;br /&gt;
Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split along good vs. bad; for example, good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split along good vs. evil; for example, good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others could certainly work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, computers acting selfishly, cruelly, or aggressively (say, DoS or spam) would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the Foucault section above, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer can only be released when its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
&lt;br /&gt;
=== Crime and Punishment ===&lt;br /&gt;
This is just a little placeholder for some thoughts before I post them to the main page. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Limiting capabilities&#039;&#039;&#039;&lt;br /&gt;
Anil mentioned in class the possibility of revoking or limiting capabilities if a user/computer has been found to be guilty of a crime. For example, the computer could somehow lose its ability to perform encryption or secure communications. Somewhat related is the idea of cpu-throttling by performing additional work (explained in the section below). &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Proof-of-Work&#039;&#039;&#039;&lt;br /&gt;
There has been a lot of research in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (far less than a cent per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer solve some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result smaller than a specific target value. You can statistically predict how long such a computation will take, and you can tune the target so the expected time is some particular value (10 s, 1 min, etc.). &lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
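The puzzle above can be sketched in a few lines. The zero-prefix difficulty scheme and the message format here are illustrative assumptions in the spirit of hashcash, not a finished design:&lt;br /&gt;

```python
import hashlib
import itertools

def solve_puzzle(message, difficulty):
    """Find a nonce so sha256(message:nonce) starts with `difficulty` hex zeros."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # the sender pays the search cost

def verify_puzzle(message, nonce, difficulty):
    # Verification is a single hash, so it is cheap for the recipient.
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Verification costs one hash, while each extra hex zero multiplies the expected search cost by 16, which is how the difficulty could be scaled up as a punishment for untrusted or misbehaving senders.&lt;br /&gt;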
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Unique identifiers&#039;&#039;&#039; &lt;br /&gt;
How are machines identified? Although this problem is related to attribution and another team is working on it, we can make the basic assumption that each machine is identifiable. This identifier should survive a reformat, but buying a new machine would get you a new identifier. We might argue that this is fine, because all we&#039;re trying to do is raise the price an attacker has to pay to commit a crime (i.e., buy more machines).&lt;br /&gt;
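One illustrative way such an identifier could be derived (the hardware-serial inputs are an assumption; real attribution is the other team&#039;s problem) is to hash serial numbers that survive a reformat but not a hardware replacement:&lt;br /&gt;

```python
import hashlib

def machine_id(hardware_serials):
    """Derive an ID from serials that survive a reformat (e.g., board, disk).

    Replacing the hardware (buying a new machine) naturally yields a new ID.
    """
    # Sort so the ID does not depend on the order serials are enumerated.
    canonical = "|".join(sorted(hardware_serials))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]
```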
&lt;br /&gt;
===Gathering Evidence===&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes using the statistics to block bad traffic, the same logic can be applied to gathering evidence against the attackers behind a DDoS. It gets pretty heavy into the statistical analysis, so it&#039;s probably better to read the paper than to have me attempt to explain it. &lt;br /&gt;
Basically, it&#039;s meant to detect a DDoS that is purposefully disguised as a legitimate traffic flood. This means that justice can be properly served to malicious computers, as opposed to punishing a crowd of computers that simply all want your resources.&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses the idea of computer forensics and what is needed in order to gather and manage evidence. Although it is meant to be applied at a human level of judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to protect the data while preserving its original form. The paper also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software running on select computers to process and preserve any evidence gathered.&lt;br /&gt;
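As a sketch of how logs could preserve their original form, a hash chain commits each record to the one before it, so tampering with any record invalidates every later hash. This is our own illustration, not the scheme from the paper:&lt;br /&gt;

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def append_entry(log, event):
    """Append event; the new record commits to the previous record's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log):
    """Recompute every hash; editing any record breaks all later links."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Entries could additionally be encrypted or signed client-side; the chain only provides tamper-evidence, not confidentiality.&lt;br /&gt;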
&lt;br /&gt;
===CFAA Computer Fraud and Abuse Act===&lt;br /&gt;
&lt;br /&gt;
In terms of justice, there has been an act that has specifications to what cyber crimes and of what caliber they should be categorized in. One fundamental idea that is followed is the mens rea, which is defined as the &amp;quot;mental state&amp;quot; of a crime. &amp;quot;The Model Penal Code (&amp;quot;MPC&amp;quot;) lists four levels of mens rea -- purposely, knowingly, recklessly, and negligently. The MPC categories range from the highest level, purposely, to the lowest level, negligently. These mens rea levels are further divided into high and low mens rea requirements. The high mens rea levels include acts criminals do intentionally and knowingly. The low mens rea levels include acts criminals do recklessly, negligently, and with strict liability. Criminals have a higher level of mens rea when their intent is more specific; therefore, they are more blameworthy. With these differing mens rea categories in mind, Congress drafted CFAA to address computer crimes occurring on the Internet.&amp;quot;[http://www.lawtechjournal.com/archives/blt/i3-hh.html]&lt;br /&gt;
&lt;br /&gt;
Reading into the decisions made in updating the CFAA brings up the distinctions between users who &amp;quot;intentionally&amp;quot; do harm, users unknowingly participating, and even users attempting to help the system by hurting it and then fixing it ([http://scholar.google.com/scholar_case?case=551386241451639668 Morris]).&lt;br /&gt;
&lt;br /&gt;
Another note is that laws and rules are being made for humans to be penalized for such negative cyber actions, but even before penalties, it is important to set up a system secure enough to mitigate the negative actions that can take place, just as workers in a business must be educated on detecting malicious software and other vulnerabilities in order to further secure the system. Setting up stand-alone protection on each system would reduce the need to punish certain acts, since they would be impossible to carry out. [http://www.witsa.org/papers/McConnell-cybercrime.pdf Law is only part of the answer]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Concept: Justice Web==&lt;br /&gt;
&lt;br /&gt;
The Justice Web is a possible implementation that uses the research we have done so far. It is essentially an incrementally-deployable network system that shares resources with users within the same Justice Web, based on a morality rating. Evidence is logged so that those within the Web may be held accountable, and those outside it may be recorded and watched for future misbehaviour.&lt;br /&gt;
&lt;br /&gt;
===What it is===&lt;br /&gt;
&lt;br /&gt;
The Justice Web is an implementation that treats a network as a distributed system. Resources are shared among the users, based on some measure of trust. As the web grows, more computers become linked to each other within the web, making it harder to manage trust given to each member of the Web. Also, the Web should provide some shared protection for those within the network against external attacks.&lt;br /&gt;
&lt;br /&gt;
Because of this, some sort of Justice System is needed to process evidence and sentence malicious computers. The Justice Web would need a computer or computers to act as the judge. After the judgement, the Justice Web would then need to enforce a penalty on the offender.&lt;br /&gt;
&lt;br /&gt;
===What it does===&lt;br /&gt;
&lt;br /&gt;
The Justice Web links multiple computers together to act as a distributed system. The amount of resources allotted to a member depends on their moral rating and trust. The most trusted computer would possibly be the leader of the Web, acting as the judge. To gather evidence, a log is kept at each implementation node. This evidence is encrypted so that the user cannot tamper with it. &lt;br /&gt;
&lt;br /&gt;
The evidence is collected by the Justice Web and handled by members with a high level of trust. That is to say, the highly trusted computers within the system would essentially be able to define how much evidence is needed, as well as what punishment is to be handed out. The power of these high members is not absolute, but they are capable of influencing a standard set of rules. Rulings are handled using common law, with a punishment handled in the same way as previous ones unless explicitly changed by the high members. &lt;br /&gt;
&lt;br /&gt;
After the evidence is processed, and a ruling is made by the high members, the Justice Web must then enforce the punishment. For threats coming from outside the Web, each member of the Web is warned about the offender. Continued communication with the offender will be allowed, but if an infection does occur, the punishment for becoming infected would be more severe.&lt;br /&gt;
&lt;br /&gt;
As for offenders within the system, the morality rating attached to that member is affected, and the amount of trust is decreased. From a practical standpoint, the punishment would involve restricting the resources accessible to the member while increasing the member&#039;s workload. The amount of trust increases over time, allowing the member to slowly regain access to resources, but the morality rating is kept the same so that it remains visible that the member has done wrong in the past.&lt;br /&gt;
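The trust-recovers-but-morality-remembers rule could look like the following sketch; the field names, the linear recovery rate, and the resource formula are assumptions for illustration only:&lt;br /&gt;

```python
def apply_penalty(member, severity):
    """Punishment: both trust and the permanent morality record drop."""
    member["trust"] = max(0.0, member["trust"] - severity)
    member["morality"] = max(0.0, member["morality"] - severity)

def recover_trust(member, elapsed, rate=0.01):
    """Trust regrows over time, but morality stays down as a public record."""
    member["trust"] = min(1.0, member["trust"] + rate * elapsed)

def resource_share(member):
    # Resources allotted depend on current trust and on past morality.
    return member["trust"] * member["morality"]
```

Because resource_share multiplies the two values, a reformed member regains day-to-day access while its past record still costs it something.&lt;br /&gt;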
&lt;br /&gt;
&lt;br /&gt;
[http://www.computer.org/portal/web/csdl/doi/10.1109/ICDCS.2006.78 Prevent DOS by preventing spoofing]&lt;br /&gt;
&lt;br /&gt;
Ingress filtering is another good method to prevent users on a network from spoofing IPs for DoS attacks. [http://delivery.acm.org.proxy.library.carleton.ca/10.1145/350000/347560/p295-savage.pdf?key1=347560&amp;amp;key2=5289230031&amp;amp;coll=DL&amp;amp;dl=ACM&amp;amp;ip=134.117.10.200&amp;amp;CFID=14033754&amp;amp;CFTOKEN=82498533 link]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Implementation Section Notes==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;All computers have a unique ID.&lt;br /&gt;
&lt;br /&gt;
There exists some form of morality reputation that is public knowledge.&lt;br /&gt;
&lt;br /&gt;
Website administrators will limit site usage based on the morality/reputation rating.&lt;br /&gt;
&lt;br /&gt;
Laws should be posted so that everyone can see what actions will reduce their morality rating.&lt;br /&gt;
&lt;br /&gt;
Website administrators will determine what restrictions are imposed on a website based on morality ratings. Different sites may have different penalties based on their own rules.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Comment Spam===&lt;br /&gt;
&amp;lt;u&amp;gt;Evidence&amp;lt;/u&amp;gt;&lt;br /&gt;
*Need to detect the bots and the origin of the spam.&lt;br /&gt;
*Evidence should include:&lt;br /&gt;
**the comment itself.&lt;br /&gt;
**a link to the page the comment exists on.&lt;br /&gt;
**the unique ID of the perpetrator.&lt;br /&gt;
**justification of why it is spam.&lt;br /&gt;
&amp;lt;u&amp;gt;investigating computer&amp;lt;/u&amp;gt;&lt;br /&gt;
*has access to the transaction/communication data of the perpetrating computer.&lt;br /&gt;
*compares the reported message to other communications to detect if spam has occurred.&lt;br /&gt;
&amp;lt;u&amp;gt;Currently deployed solutions&amp;lt;/u&amp;gt;&lt;br /&gt;
*CAPTCHAs - try to detect whether the comment submission came from a human or a bot. &lt;br /&gt;
*Filtering - scan for and block specific keywords (pharmaceutical terms, porn terms, etc)&lt;br /&gt;
*Rate limiting - only allow N comments in X time from the same source.&lt;br /&gt;
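The rate-limiting idea (&amp;quot;only allow N comments in X time&amp;quot;) can be sketched as a sliding window per source; the class shape and parameters here are assumptions, not any particular deployed system:&lt;br /&gt;

```python
import time
from collections import deque

class RateLimiter:
    """Allow at most `limit` comments per `window` seconds per source ID."""
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.history = {}  # source ID mapped to a deque of comment timestamps

    def allow(self, source_id, now=None):
        now = time.time() if now is None else now
        stamps = self.history.setdefault(source_id, deque())
        # Drop timestamps that have aged out of the window.
        while stamps and now - stamps[0] >= self.window:
            stamps.popleft()
        if len(stamps) >= self.limit:
            return False  # over quota: reject the comment
        stamps.append(now)
        return True
```

A real deployment would also need to expire idle sources, but the windowed deque captures the N-in-X rule.&lt;br /&gt;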
&lt;br /&gt;
===Denial of Service===&lt;br /&gt;
Use the unique ID to trace back all traffic from the DoS attack to originating machines.&lt;br /&gt;
*if a certain percent of the traffic originates from a single ID (say 60%), then a DoS has occurred.&lt;br /&gt;
*only the computer conducting the DoS is penalized.&lt;br /&gt;
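The single-ID threshold test above can be sketched as follows (the 60% default and the Counter-based tallying are illustrative assumptions about how the traced traffic would be aggregated):&lt;br /&gt;

```python
from collections import Counter

def dos_suspects(packet_source_ids, threshold=0.6):
    """Return the unique IDs responsible for more than `threshold` of traffic."""
    counts = Counter(packet_source_ids)
    total = len(packet_source_ids)
    # A benign flash crowd spreads load across many IDs, so no single
    # ID should cross the threshold.
    return [sid for sid, n in counts.items() if n / total > threshold]
```

A group of colluding IDs each staying under the threshold would evade this test, which is where the statistical methods from the evidence-gathering paper above would come in.&lt;br /&gt;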
&lt;br /&gt;
===Phishing===&lt;br /&gt;
Send the original website link as well as the phishing site link in the report so that the investigating computer can compare.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8497</id>
		<title>Talk:DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=Talk:DistOS-2011W_Justice&amp;diff=8497"/>
		<updated>2011-03-14T14:29:30Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Proof-of-Work */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on the difficulty we were having finding papers related to the concept of justice in computers, so we turned to determining exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult as computers do not care what task they are given, they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to try to narrow down whether the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 08&amp;lt;/u&amp;gt;===&lt;br /&gt;
*&#039;&#039;&#039;Definition of Justice&#039;&#039;&#039; - Can we separate the computer punishment from the user punishment?&lt;br /&gt;
*&#039;&#039;&#039;Transparency&#039;&#039;&#039; - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused for participating in a malicious attack, this can be clarified&lt;br /&gt;
*&#039;&#039;&#039;Punishment&#039;&#039;&#039; - Computational puzzles for fighting unsolicited inbound traffic. &lt;br /&gt;
*&#039;&#039;&#039;Morality rating&#039;&#039;&#039; - Systems get a &amp;quot;moral rating&amp;quot; that can go up or down. Based on this rating, more or less trust can be given to that system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Capital punishment? &lt;br /&gt;
Financial sanctions and imprisonment are our current ways of punishing. They&#039;re expensive (maintaining databases, keeping state, paying for prisons). &lt;br /&gt;
&lt;br /&gt;
Bodily harm takes limited time to carry out, and the fact that someone has been punished is visible: losing hands or losing eyes is something people can see. The information propagates because the authorities make an example of someone. &lt;br /&gt;
&lt;br /&gt;
Maybe the solution is to restrict protocols for systems with a low morality rating. For example, restricting encryption and compression means anything the system does will be publicly visible.&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 10&amp;lt;/u&amp;gt;===&lt;br /&gt;
&lt;br /&gt;
* Offender Registration: a global list of morality ratings, available for perusal by other networks.&lt;br /&gt;
* Encrypted logs on client-side&lt;br /&gt;
** Reporting with tangible evidence&lt;br /&gt;
* Compensation for crimes&lt;br /&gt;
* virus notification&lt;br /&gt;
* make buying a new computer costly for the attacker, rather than aiming for total prevention&lt;br /&gt;
** assumption that computer can always be identified&lt;br /&gt;
* virus as umbrella group for mobile code&lt;br /&gt;
** active attackers punished differently from passive attackers&lt;br /&gt;
* Research Topics:&lt;br /&gt;
** What is Justice?&lt;br /&gt;
*** Mike&lt;br /&gt;
** Justice in terms of computers.&lt;br /&gt;
*** Matthew&lt;br /&gt;
** Crime and Punishment.&lt;br /&gt;
*** David&lt;br /&gt;
** Justice Web&lt;br /&gt;
*** Thomas McMahon&lt;br /&gt;
&lt;br /&gt;
==Research Documentation==&lt;br /&gt;
&lt;br /&gt;
===Virtual Punishment===&lt;br /&gt;
I am currently reading part of this book for some details on virtual punishment and a bit of the history the author covers, but I&#039;m not sure there is much there yet. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=KacfpI0zYAUC&amp;amp;oi=fnd&amp;amp;pg=PA206&amp;amp;dq=punishing+computers&amp;amp;ots=YhI8lfMo1F&amp;amp;sig=c7MqOVjR-9QKjj5_ANi0yyxYiAA#v=onepage&amp;amp;q=punishing%20computers&amp;amp;f=false link] --[[User:Mchou2|Mchou2]] 03:29, 3 March 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.2723&amp;amp;rep=rep1&amp;amp;type=pdf Responsible Computers?]&lt;br /&gt;
&lt;br /&gt;
===Theory of Justice===&lt;br /&gt;
&lt;br /&gt;
Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of justice that may suit distributed computing. Rawls describes justice as serving two primary functions:&lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; type approach and may be more difficult to describe in terms of being incrementally deployable.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===The Birth of Prison===&lt;br /&gt;
Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book focuses on how punishment evolved from medieval methods such as &amp;quot;drawing and quartering&amp;quot; to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under medieval, or &amp;quot;Monarchical Punishment&amp;quot;, the population is discouraged from doing bad acts by the public, and brutal, way that punishment is exacted; the punishments included torture and executions. In contrast, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, where people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this suggests a couple of ways that justice could be enforced. Think of the general distributed system as a free zone in which computers can act how they wish, but with laws in place describing &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of violating those laws, the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or placed under the care of a supervisor computer that allows the &amp;quot;bad&amp;quot; computer to continue participating in certain restricted actions until the professional (supervisor) computer approves releasing it back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change; you will carry a social stigma. If this is adopted by computers, through some reputation mechanism, then distributed computing relationships could be formed and altered based on the actions of individual computers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Ecce Homo &amp;amp; The Anarchist===&lt;br /&gt;
Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split along good vs. bad: for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split along good vs. evil: for example, good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the Foucault section above, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer can only be released when its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
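To make the morality-as-reputation idea concrete, here is a hypothetical sketch in Python. The event names, weights, and thresholds are invented for illustration; they are not part of Nietzsche&#039;s scheme or any existing system.&lt;br /&gt;

```python
# Hypothetical weights: positive for "master-morality" virtues such as
# contributing resources, negative for selfish or aggressive acts.
WEIGHTS = {
    "shared_bandwidth": 2,
    "served_request": 1,
    "sent_spam": -5,
    "joined_dos": -10,
}

def morality_score(events):
    """Sum the weighted good and bad acts reported for a computer."""
    return sum(WEIGHTS.get(event, 0) for event in events)

def may_interact(score, threshold=0):
    """Each peer picks its own threshold, so a tolerant peer can still
    deal with less "moral" computers (the relationship parameters above)."""
    return score >= threshold
```

A peer that mostly serves requests but once sent spam would score negatively and could be refused by strict peers while still being accepted by peers with a lower threshold.&lt;br /&gt;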
&lt;br /&gt;
=== Crime and Punishment ===&lt;br /&gt;
This is just a little placeholder for some thoughts before I post them to the main page. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Limiting capabilities&#039;&#039;&#039;&lt;br /&gt;
Anil mentioned in class the possibility of revoking or limiting capabilities if a user/computer has been found to be guilty of a crime. For example, the computer could somehow lose its ability to perform encryption or secure communications. Somewhat related is the idea of cpu-throttling by performing additional work (explained in the section below). &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Proof-of-Work&#039;&#039;&#039;&lt;br /&gt;
There has been a lot of research in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (a fraction of a cent per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result below a specific target value. You can statistically predict how long such a computation will take, and you can tune it to some particular value (10s, 1m, etc.). &lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
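A minimal hashcash-style sketch of the puzzle described above, in Python. The function names and the leading-zeros target are our own simplification, not taken from the drafts linked in this section; real schemes also bind the token to a date and recipient to prevent reuse.&lt;br /&gt;

```python
import hashlib

def solve_puzzle(message, difficulty):
    """Brute-force a counter so that sha256(message:counter) begins with
    'difficulty' hex zeros; expected work grows 16x per extra zero."""
    counter = 0
    while True:
        digest = hashlib.sha256(f"{message}:{counter}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return counter
        counter += 1

def verify(message, counter, difficulty):
    """Checking a solution costs a single hash, unlike finding one."""
    digest = hashlib.sha256(f"{message}:{counter}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

A receiver could hand a higher difficulty to senders with a bad reputation, which is exactly the graduated punishment suggested above: verification stays cheap for everyone, while solving gets exponentially more expensive for misbehaving entities.&lt;br /&gt;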
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Unique identifiers&#039;&#039;&#039; &lt;br /&gt;
How are machines identified? Although this problem is related to attribution and another team is working on it, we can make the basic assumption that each machine is identifiable. This identifier should survive a reformat, but buying a new machine would get you a new identifier. We might argue that this is fine, because all we&#039;re trying to do is raise the price an attacker has to pay to commit a crime (i.e., buy more machines).&lt;br /&gt;
&lt;br /&gt;
===Gathering Evidence===&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes that the statistics be used for blocking bad traffic, the same logic can be applied to gathering evidence against the attackers behind a DDoS. It gets pretty heavy into the statistical analysis, so it&#039;s probably better to read the paper than to have me attempt to explain it. &lt;br /&gt;
Basically, it&#039;s meant to detect a DDoS that is purposefully disguised as a legitimate traffic flood. This means that justice can be properly served to malicious computers, as opposed to a crowd of computers that simply want your resources.&lt;br /&gt;
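The statistics themselves are best taken from the paper, but the flavor of statistical discrimination can be sketched: for instance, the Shannon entropy of source addresses in a traffic window shifts sharply during many floods. This toy Python example is only an illustration of the general approach, not the paper&#039;s actual method.&lt;br /&gt;

```python
import math
from collections import Counter

def source_entropy(packets):
    """Shannon entropy (bits) of the source addresses seen in one window."""
    counts = Counter(packets)
    total = len(packets)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_attack(baseline, observed, tolerance=1.0):
    """Flag a window whose entropy deviates sharply from the learned baseline."""
    return abs(observed - baseline) > tolerance
```

A window dominated by one spoofed-looking pattern (or spread unnaturally evenly) moves the entropy away from the site&#039;s normal baseline, and the flagged windows could be logged as evidence rather than merely blocked.&lt;br /&gt;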
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses the idea of computer forensics and what is needed in order to gather and manage evidence. Although it is meant to be applied at the level of human judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to protect the data while preserving its original form. The paper also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software running on select computers to process and preserve any evidence gathered.&lt;br /&gt;
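One common way to preserve logs in their original form is a hash chain, where each record commits to the hash of the previous one, so any later tampering is detectable. This is an illustrative Python sketch; the record layout is ours, not the paper&#039;s.&lt;br /&gt;

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash preceding the first record

def _digest(event, prev):
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log, event):
    """Chain each new record to the hash of the record before it."""
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"event": event, "prev": prev, "hash": _digest(event, prev)})

def verify_log(log):
    """Recompute the chain; any edited, inserted, or deleted record breaks it."""
    prev = GENESIS
    for record in log:
        if record["prev"] != prev or record["hash"] != _digest(record["event"], prev):
            return False
        prev = record["hash"]
    return True
```

In practice each record (or the chain head) would also be digitally signed, since a hash chain alone only detects tampering by parties who cannot rewrite the whole log.&lt;br /&gt;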
&lt;br /&gt;
===CFAA Computer Fraud and Abuse Act===&lt;br /&gt;
&lt;br /&gt;
In terms of justice, there is an act that specifies which cyber crimes exist and the caliber into which they should be categorized. One fundamental idea it follows is mens rea, defined as the &amp;quot;mental state&amp;quot; of a crime. &amp;quot;The Model Penal Code (&amp;quot;MPC&amp;quot;) lists four levels of mens rea -- purposely, knowingly, recklessly, and negligently. The MPC categories range from the highest level, purposely, to the lowest level, negligently. These mens rea levels are further divided into high and low mens rea requirements. The high mens rea levels include acts criminals do intentionally and knowingly. The low mens rea levels include acts criminals do recklessly, negligently, and with strict liability. Criminals have a higher level of mens rea when their intent is more specific; therefore, they are more blameworthy. With these differing mens rea categories in mind, Congress drafted CFAA to address computer crimes occurring on the Internet.&amp;quot;[http://www.lawtechjournal.com/archives/blt/i3-hh.html]&lt;br /&gt;
&lt;br /&gt;
Reading into the decisions made to update the CFAA raises the question of how to treat users who &amp;quot;intentionally&amp;quot; do harm, users who participate unknowingly, and even users attempting to help the system by hurting it and then fixing it ([http://scholar.google.com/scholar_case?case=551386241451639668 Morris]).&lt;br /&gt;
&lt;br /&gt;
Another note: while laws and rules are being made so that humans can be penalized for such negative cyber actions, even before penalties it is important to set up a system secure enough to mitigate the negative actions that can take place, just as workers in a business must be educated on detecting malicious software and other vulnerabilities in order to further secure the system. Setting up stand-alone protection on each system would remove the need to punish certain acts, since they would be impossible to commit. [http://www.witsa.org/papers/McConnell-cybercrime.pdf Law is only part of the answer]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8385</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8385"/>
		<updated>2011-03-10T18:00:24Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Members */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members==&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera&lt;br /&gt;
&lt;br /&gt;
Note: research so far moved to Discussion section.&lt;br /&gt;
&lt;br /&gt;
==What is Justice?==&lt;br /&gt;
&lt;br /&gt;
==Justice Involving Computers==&lt;br /&gt;
&lt;br /&gt;
==Crime and Punishment==&lt;br /&gt;
&lt;br /&gt;
==Concept: Justice Web==&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8257</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8257"/>
		<updated>2011-03-08T19:01:27Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* March 8 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members==&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera (late addition, sorry guys)&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figure heads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult as computers do not care what task they are given, they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to try and narrow down if the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
==Resources==&lt;br /&gt;
&lt;br /&gt;
[1]Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of justice that may serve the purposes of distributed computing. Rawls describes justice as serving two primary functions:&lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; type approach and may be more difficult to describe in terms of being incrementally deployable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book focuses on how punishment evolved from medieval methods such as &amp;quot;drawing and quartering&amp;quot; to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under medieval, or &amp;quot;Monarchical&amp;quot;, punishment, the population is discouraged from committing bad acts by the public, and brutal, way that punishment is exacted; punishments included torture and executions. On the other hand, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, in which people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this suggests a couple of ways that justice could be enforced. Think of the general distributed system as a free zone in which computers can act as they wish, but with laws in place that describe &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of breaking those laws, the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or placed under the care of a supervisor computer that allows the &amp;quot;bad&amp;quot; computer to continue participating in certain restricted actions until the professional (supervisor) computer approves releasing the &amp;quot;bad&amp;quot; computer back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame: most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change; you will carry a social stigma. If this were adopted by computers, through some reputation mechanism, then distributed computing relationships could be formed and altered based on the actions conducted by individual computers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Anarchist&amp;lt;/i&amp;gt; translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split along good vs. bad: for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split along good vs. evil: for example, good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the Foucault section above, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer can only be released when its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
&lt;br /&gt;
=Proof-of-work=&lt;br /&gt;
There has been a lot of research in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (a fraction of a cent per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result below a specific target value. You can statistically predict how long such a computation will take, and you can tune it to some particular value (10s, 1m, etc.). &lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
=Gathering Evidence=&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes that the statistics be used for blocking bad traffic, the same logic can be applied to gathering evidence against the attackers behind a DDoS. It gets pretty heavy into the statistical analysis, so it&#039;s probably better to read the paper than to have me attempt to explain it. Basically, it&#039;s meant to detect a DDoS that is purposefully disguised as a legitimate traffic flood. This means that justice can be properly served to malicious computers, as opposed to a crowd of computers that simply want your resources.&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses the idea of computer forensics and what is needed in order to gather and manage evidence. Although it is meant to be applied at the level of human judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to protect the data while preserving its original form. The paper also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software running on select computers to process and preserve any evidence gathered.&lt;br /&gt;
&lt;br /&gt;
=March 8=&lt;br /&gt;
*&#039;&#039;&#039;Definition of Justice&#039;&#039;&#039; - Can we separate the computer punishment from the user punishment?&lt;br /&gt;
*&#039;&#039;&#039;Transparency&#039;&#039;&#039; - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused of participating in a malicious attack, this could be clarified.&lt;br /&gt;
*&#039;&#039;&#039;Punishment&#039;&#039;&#039; - Computational puzzles for fighting unsolicited inbound traffic. &lt;br /&gt;
*&#039;&#039;&#039;Morality rating&#039;&#039;&#039; - Systems get a &amp;quot;moral rating&amp;quot; that can go up or down. Based on this rating, more or less trust can be given to that system.&lt;br /&gt;
&lt;br /&gt;
Capital punishment? &lt;br /&gt;
Financial sanctions and imprisonment are our current ways of punishing. They&#039;re expensive (maintaining databases, keeping state, paying for prisons). &lt;br /&gt;
&lt;br /&gt;
Bodily harm - limited time to perform, and the fact that someone has been punished is visible. Losing hands, losing eyes: people can see that. Information propagates because the authorities make an example of someone. &lt;br /&gt;
&lt;br /&gt;
Maybe the solution is to restrict protocols if you have a low morality rating; e.g., you can restrict encryption and compression, which means anything you do will be publicly visible.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8254</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8254"/>
		<updated>2011-03-08T18:44:47Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* March 8 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members==&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera (late addition, sorry guys)&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figure heads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult as computers do not care what task they are given, they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to try and narrow down if the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
==Resources==&lt;br /&gt;
&lt;br /&gt;
[1]Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of justice that may serve the purposes of distributed computing. Rawls describes justice as serving two primary functions:&lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; type approach and may be more difficult to describe in terms of being incrementally deployable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book focuses on how punishment evolved from medieval methods such as &amp;quot;drawing and quartering&amp;quot; to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under medieval, or &amp;quot;Monarchical&amp;quot;, punishment, the population is discouraged from committing bad acts by the public, and brutal, way that punishment is exacted; punishments included torture and executions. On the other hand, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, in which people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this provides a couple of ways that justice could be enforced. If we think of the general distributed system as a free zone in which computers can act how they wish but there are laws in place to describe &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of doing something against the described laws, then the computer could be tortured (forced to provide more resources to other computers), executed (completly removed from the system) or potentially placed under the care of a supervisor computer who will allow the &amp;quot;bad&amp;quot; computer to continue to participate in certain, restricted actions until the professional (supervisor) computer approves of releasing the &amp;quot;bad&amp;quot; computer back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is that of Foucault&#039;s &amp;quot;Panopticon&amp;quot; which is a prison in which everything can be seen. This can also be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change, you will have a social stygma. If this is adopted by the computers, through some reputation mechanism, then maybe distributed computing relationships could be formed and altered based on the actions conducted by individual computers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split along good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers terms like worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers, so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the Foucault section above, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer could only be released once its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
&lt;br /&gt;
=Proof-of-work=&lt;br /&gt;
There has been a lot of research in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (far less than a cent per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result smaller than a specific target value. You can statistically predict how long such a computation would take, and you could tune it to some particular value (10 s, 1 min, etc.).&lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
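As a rough illustration of the hashcash-style puzzle described above, here is a minimal sketch (the function names, the challenge string, and the 16-bit difficulty are illustrative choices, not taken from the linked drafts):&lt;br /&gt;

```python
import hashlib
import itertools

def mint(challenge: str, bits: int) -> int:
    # Search nonces until SHA-256(challenge + ":" + nonce) falls below the
    # target; expected work is about 2**bits hash evaluations.
    target = 1 << (256 - bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: str, nonce: int, bits: int) -> bool:
    # A single hash to check: cheap for the recipient,
    # expensive for the sender to find.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

nonce = mint("mail-from:alice@example.org", 16)   # ~65k hashes on average
print(verify("mail-from:alice@example.org", nonce, 16))  # prints True
```

Raising `bits` for an untrusted or misbehaving sender roughly doubles their expected work per extra bit, which is exactly the knob a punishment scheme would turn.&lt;br /&gt;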
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
=Gathering Evidence=&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes using the statistics to block bad traffic, the same logic can be applied to gathering evidence against the attackers in a DDoS. It gets fairly deep into the statistical analysis, so the paper itself explains it better than I can here. Essentially, it is meant to detect a DDoS that is purposely disguised as an ordinary traffic flood. This means that justice can be properly served to malicious computers, as opposed to hosts that merely want your resources legitimately.&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses computer forensics and what is needed to gather and manage evidence. Although it is aimed at a human level of judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to cryptographically protect the data so that its original form is preserved. The paper also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software running on select computers to process and preserve any evidence gathered.&lt;br /&gt;
&lt;br /&gt;
=March 8=&lt;br /&gt;
*&#039;&#039;&#039;Definition of Justice&#039;&#039;&#039;&lt;br /&gt;
*&#039;&#039;&#039;Transparency&#039;&#039;&#039; - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused of participating in a malicious attack, this record could clear you&lt;br /&gt;
*&#039;&#039;&#039;Punishment&#039;&#039;&#039; - Computational puzzles for fighting unsolicited inbound traffic. &lt;br /&gt;
*&#039;&#039;&#039;Morality rating&#039;&#039;&#039; - Systems get a &amp;quot;moral rating&amp;quot; that can go up or down. Based on this rating, more or less trust can be given to that system.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8252</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8252"/>
		<updated>2011-03-08T18:38:12Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* March 8 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members==&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera (late addition, sorry guys)&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult as computers do not care what task they are given, they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to try to narrow down whether the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
==Resources==&lt;br /&gt;
&lt;br /&gt;
[1]Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of justice that may suit the purposes of distributed computing. Rawls describes justice as serving two primary functions:&lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; type approach and may be more difficult to describe in terms of being incrementally deployable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book traces how punishment evolved from medieval methods such as drawing and quartering to modern prison methods. These two modes of justice are differentiated by the way in which punishment is carried out. Under medieval, or &amp;quot;Monarchical Punishment&amp;quot;, the population is discouraged from bad acts by the public, and brutal, way that punishment is exacted; the punishments included torture and executions. On the other hand, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, where people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this suggests a couple of ways that justice could be enforced. We can think of the general distributed system as a free zone in which computers act as they wish, but with laws in place that define &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of breaking those laws, it could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or placed under the care of a supervisor computer that allows the &amp;quot;bad&amp;quot; computer to continue participating in certain restricted actions until the professional (supervisor) computer approves releasing it back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change; you will carry a social stigma. If computers adopted this through some reputation mechanism, then distributed computing relationships could be formed and altered based on the actions of individual computers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split along good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers terms like worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers, so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the Foucault section above, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer could only be released once its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
&lt;br /&gt;
=Proof-of-work=&lt;br /&gt;
There has been a lot of research in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (far less than a cent per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result smaller than a specific target value. You can statistically predict how long such a computation would take, and you could tune it to some particular value (10 s, 1 min, etc.).&lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
=Gathering Evidence=&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes using the statistics to block bad traffic, the same logic can be applied to gathering evidence against the attackers in a DDoS. It gets fairly deep into the statistical analysis, so the paper itself explains it better than I can here. Essentially, it is meant to detect a DDoS that is purposely disguised as an ordinary traffic flood. This means that justice can be properly served to malicious computers, as opposed to hosts that merely want your resources legitimately.&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses computer forensics and what is needed to gather and manage evidence. Although it is aimed at a human level of judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to cryptographically protect the data so that its original form is preserved. The paper also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software running on select computers to process and preserve any evidence gathered.&lt;br /&gt;
&lt;br /&gt;
=March 8=&lt;br /&gt;
*&#039;&#039;&#039;Transparency&#039;&#039;&#039; - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused of participating in a malicious attack, this record could clear you&lt;br /&gt;
*&#039;&#039;&#039;Punishment&#039;&#039;&#039; - Computational puzzles for fighting unsolicited inbound traffic&lt;br /&gt;
*&#039;&#039;&#039;Morality rating&#039;&#039;&#039; - Systems get a &amp;quot;moral rating&amp;quot; that can go up or down. Based on this rating, more or less trust can be given to that system.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8251</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8251"/>
		<updated>2011-03-08T18:37:04Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* March 8 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members==&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera (late addition, sorry guys)&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult as computers do not care what task they are given, they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to try to narrow down whether the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
==Resources==&lt;br /&gt;
&lt;br /&gt;
[1]Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of justice that may suit the purposes of distributed computing. Rawls describes justice as serving two primary functions:&lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; type approach and may be more difficult to describe in terms of being incrementally deployable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book traces how punishment evolved from medieval methods such as drawing and quartering to modern prison methods. These two modes of justice are differentiated by the way in which punishment is carried out. Under medieval, or &amp;quot;Monarchical Punishment&amp;quot;, the population is discouraged from bad acts by the public, and brutal, way that punishment is exacted; the punishments included torture and executions. On the other hand, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, where people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this suggests a couple of ways that justice could be enforced. We can think of the general distributed system as a free zone in which computers act as they wish, but with laws in place that define &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of breaking those laws, it could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or placed under the care of a supervisor computer that allows the &amp;quot;bad&amp;quot; computer to continue participating in certain restricted actions until the professional (supervisor) computer approves releasing it back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change; you will carry a social stigma. If computers adopted this through some reputation mechanism, then distributed computing relationships could be formed and altered based on the actions of individual computers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split along good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the Foucault section above, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer can only be released when its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
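A minimal sketch of the reputation idea above: a per-computer &amp;quot;moral rating&amp;quot; that rises with good interactions, falls faster with reported misbehaviour, and gates trust at a threshold. All names, weights, and thresholds here are hypothetical illustrations, not taken from the cited works.&lt;br /&gt;

```python
class Reputation:
    """Hypothetical per-computer "moral rating" kept in [0, 1].

    Good interactions nudge the score up; reported misbehaviour
    pulls it down more sharply, echoing the idea that "bad" acts
    should carry a heavier social cost than good acts earn back.
    """

    def __init__(self, score: float = 0.5):
        self.score = score

    def report_good(self, weight: float = 0.05):
        # Reward good behaviour, capped at a perfect rating of 1.0.
        self.score = min(1.0, self.score + weight)

    def report_bad(self, weight: float = 0.2):
        # Penalize misbehaviour more heavily, floored at 0.0.
        self.score = max(0.0, self.score - weight)

    def trusted(self, threshold: float = 0.5) -> bool:
        # Relationships are kept only with computers at or above the threshold.
        return self.score >= threshold
```

A computer flagged for one bad act would need several good interactions before others trust it again, mirroring the supervised-release idea from the Foucault section.&lt;br /&gt;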
&lt;br /&gt;
=Proof-of-work=&lt;br /&gt;
There has been a lot of research in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (a tiny fraction of a cent per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result below a specific target value. You can statistically predict how long such a computation would take, and you could tune the target so the expected time is some particular value (10s, 1m, etc).&lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
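The hashcash-style puzzle described above can be sketched as follows; the message format, nonce search, and zero-prefix target are illustrative assumptions rather than the exact scheme in the linked drafts. The sender grinds through nonces until the hash of the message plus nonce starts with a required number of zero hex digits, while the recipient verifies with a single hash. Each extra required digit multiplies the expected work by 16, which is the knob for tuning difficulty (and hence punishment).&lt;br /&gt;

```python
import hashlib
import itertools


def solve_puzzle(message: str, difficulty: int) -> int:
    """Search for a nonce so that sha256(message + nonce) starts with
    `difficulty` zero hex digits. Expected attempts: 16 ** difficulty."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce


def verify(message: str, nonce: int, difficulty: int) -> bool:
    """Recipient-side check: a single hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

A misbehaving sender could be required to solve at a higher difficulty before its traffic is accepted, which is the punishment mechanism discussed above.&lt;br /&gt;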
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
=Gathering Evidence=&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes using the statistics to block bad traffic, the same logic can be applied to gathering evidence against the attackers in a DDoS. The statistical analysis is involved, so the paper itself explains it better than a summary here could. Essentially, it is meant to detect a DDoS that is purposefully disguised as an ordinary traffic flood. This means that justice can be properly served to malicious computers, as opposed to punishing the case where too many legitimate computers simply want your resources.&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses the idea of computer forensics and what is needed in order to gather and manage evidence. Although it is meant to be applied at a human level of judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to encrypt the data in a way that preserves its original form. The paper also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software running on select computers to process and preserve any evidence gathered.&lt;br /&gt;
&lt;br /&gt;
=March 8=&lt;br /&gt;
*Transparency - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused of participating in a malicious attack, the record could clear you&lt;br /&gt;
*Punishment - Computational puzzles for fighting unsolicited inbound traffic&lt;br /&gt;
*Morality - Systems get a &amp;quot;moral rating&amp;quot; that can go up or down. Based on this rating, more or less trust can be given to that system.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8250</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8250"/>
		<updated>2011-03-08T18:35:54Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Gathering Evidence */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members==&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera (late addition, sorry guys)&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult, as computers do not care what task they are given; they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than on the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to narrow down whether the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
==Resources==&lt;br /&gt;
&lt;br /&gt;
[1]Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of justice that may serve the purpose of distributed computing. Rawls describes justice as serving two primary functions:&lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; approach and may be more difficult to deploy incrementally.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book focuses on how punishment evolved from medieval methods such as &amp;quot;drawing and quartering&amp;quot; to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under medieval, or &amp;quot;Monarchical&amp;quot;, punishment, the population is discouraged from doing bad acts by the public, and brutal, way that punishment is exacted. The punishments included torture and executions. On the other hand, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, where people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this suggests a couple of ways that justice could be enforced. Think of the general distributed system as a free zone in which computers may act as they wish, but with laws in place that describe &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of breaking those laws, the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or placed under the care of a supervisor computer that allows the &amp;quot;bad&amp;quot; computer to continue participating in certain restricted actions until the professional (supervisor) computer approves releasing it back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because those around you will see what you have done and their view of you will change; you will carry a social stigma. If computers adopted this, through some reputation mechanism, then distributed computing relationships could be formed and altered based on the actions of individual computers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split along good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the Foucault section above, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer can only be released when its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
&lt;br /&gt;
=Proof-of-work=&lt;br /&gt;
There has been a lot of research in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (a tiny fraction of a cent per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result below a specific target value. You can statistically predict how long such a computation would take, and you could tune the target so the expected time is some particular value (10s, 1m, etc).&lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;br /&gt;
&lt;br /&gt;
=Gathering Evidence=&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&amp;amp;tag=1&lt;br /&gt;
&lt;br /&gt;
Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes using the statistics to block bad traffic, the same logic can be applied to gathering evidence against the attackers in a DDoS. The statistical analysis is involved, so the paper itself explains it better than a summary here could. Essentially, it is meant to detect a DDoS that is purposefully disguised as an ordinary traffic flood. This means that justice can be properly served to malicious computers, as opposed to punishing the case where too many legitimate computers simply want your resources.&lt;br /&gt;
&lt;br /&gt;
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052&lt;br /&gt;
&lt;br /&gt;
Above is a paper that discusses the idea of computer forensics and what is needed in order to gather and manage evidence. Although it is meant to be applied at a human level of judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to encrypt the data in a way that preserves its original form. The paper also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software running on select computers to process and preserve any evidence gathered.&lt;br /&gt;
&lt;br /&gt;
=March 8=&lt;br /&gt;
*Transparency - keeping &amp;quot;rap sheets&amp;quot; on what systems are doing/have done. If you were wrongfully accused of participating in a malicious attack, the record could clear you&lt;br /&gt;
*Punishment - Computational puzzles for fighting unsolicited inbound traffic&lt;br /&gt;
*&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8224</id>
		<title>DistOS-2011W Justice</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Justice&amp;diff=8224"/>
		<updated>2011-03-08T14:58:57Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Members==&lt;br /&gt;
* Matthew Chou&lt;br /&gt;
* Mike Preston&lt;br /&gt;
* Thomas McMahon&lt;br /&gt;
* David Barrera (late addition, sorry guys)&lt;br /&gt;
&lt;br /&gt;
==Meetings==&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 01&amp;lt;/u&amp;gt;===&lt;br /&gt;
Early discussions on how we would define justice:&lt;br /&gt;
* what are the components of justice?&lt;br /&gt;
* should justice involve preventative measures or should it be strictly reactive?&lt;br /&gt;
&lt;br /&gt;
How would evidence be collected and logged?&lt;br /&gt;
&lt;br /&gt;
Discussions on what &amp;quot;punishment&amp;quot; means when referring to computers:&lt;br /&gt;
* What can we do to punish or penalize computers?&lt;br /&gt;
* Does it make sense to punish computers?&lt;br /&gt;
&lt;br /&gt;
Discussions on how human penal systems work:&lt;br /&gt;
* do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed &amp;quot;bad&amp;quot; acts?&lt;br /&gt;
* should we implement a system that catches/punishes all bad acts or just punish reported acts?&lt;br /&gt;
* how will we classify deviant behaviour?&lt;br /&gt;
** by the act itself &lt;br /&gt;
** by the results of the act&lt;br /&gt;
&lt;br /&gt;
Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:&lt;br /&gt;
* collective internet justice: &amp;lt;b&amp;gt;Justice Web&amp;lt;/b&amp;gt; or &amp;lt;b&amp;gt;JLA&amp;lt;/b&amp;gt; (Justice Link Assessment)&lt;br /&gt;
* each region patrolled by a justice managing unit:&lt;br /&gt;
** Internet Batman (Gotham), Internet Superman (Metropolis), etc.&lt;br /&gt;
&lt;br /&gt;
Divided the task of finding research papers into 3 sections:&lt;br /&gt;
* current ways to &amp;quot;punish&amp;quot; computers (Matthew)&lt;br /&gt;
* ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)&lt;br /&gt;
* human methods of justice, various penal systems in our current and historical societies (Mike)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===&amp;lt;u&amp;gt;Mar 03&amp;lt;/u&amp;gt;===&lt;br /&gt;
Initial discussions focused on how we were having difficulty finding papers related to the concept of justice in computers, so we focused on trying to determine exactly what justice should be in the realm of distributed computing:&lt;br /&gt;
* punishing computers is difficult, as computers do not care what task they are given; they just complete computations.&lt;br /&gt;
* punishing people is not really the focus we need as that is what human laws are for.&lt;br /&gt;
* if there is some way to punish a computer, does it make sense to punish computers that are being used for &amp;quot;bad&amp;quot; actions if the owner of the computer is unaware of this activity?&lt;br /&gt;
** does this punishment really have a greater effect on the owner of the computer than on the computer itself?&lt;br /&gt;
&lt;br /&gt;
Our new focus is to narrow down whether the concept of justice actually has a place in distributed computing:&lt;br /&gt;
* determine what purpose justice would serve...why would we have it?&lt;br /&gt;
** if we decide justice is a necessary concept, the focus will become what is a &amp;quot;fair&amp;quot; way to apply punishment for &amp;quot;bad&amp;quot; actions.&lt;br /&gt;
** if justice does not have a useful purpose then we must detail the reason that it is not beneficial.&lt;br /&gt;
&lt;br /&gt;
==Resources==&lt;br /&gt;
&lt;br /&gt;
[1]Rawls, John, &amp;lt;i&amp;gt;A Theory of Justice: Revised Edition&amp;lt;/i&amp;gt;, Harvard University Press, 2003. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=kvpby7HtAe0C&amp;amp;oi=fnd&amp;amp;pg=PR11&amp;amp;dq=concepts+of+justice&amp;amp;ots=tggvx5zc67&amp;amp;sig=s4OHDBhkpDzumtlH0mIUO7cbCys#v=onepage&amp;amp;q=concepts%20of%20justice&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* This book provides a view of justice that may serve the purpose of distributed computing. Rawls describes justice as serving two primary functions:&lt;br /&gt;
       1. Assign rights and duties for the basic institutions of society.&lt;br /&gt;
       2. Describe the best way to distribute the benefits and burdens of society.&lt;br /&gt;
*If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an &amp;quot;all-in&amp;quot; approach and may be more difficult to deploy incrementally.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[2]Foucault, Michel, &amp;lt;i&amp;gt;Discipline &amp;amp; Punish: The Birth of the Prison&amp;lt;/i&amp;gt;, Random House, New York, 1995. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=pWv1R2o_PWsC&amp;amp;oi=fnd&amp;amp;pg=PP9&amp;amp;dq=discipline+and+punish&amp;amp;ots=tL8pS67AAh&amp;amp;sig=L9PEOPaNiLAxYCkqF627K1la5Hw#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* Foucault&#039;s book focuses on how punishment evolved from medieval methods such as &amp;quot;drawing and quartering&amp;quot; to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under medieval, or &amp;quot;Monarchical&amp;quot;, punishment, the population is discouraged from doing bad acts by the public, and brutal, way that punishment is exacted. The punishments included torture and executions. On the other hand, Foucault discusses &amp;quot;Disciplinary Punishment&amp;quot;, where people deemed experts have power over the perpetrator of a &amp;quot;bad&amp;quot; act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.&lt;br /&gt;
*For a distributed computing system, this suggests a couple of ways that justice could be enforced. Think of the general distributed system as a free zone in which computers may act as they wish, but with laws in place that describe &amp;quot;bad&amp;quot; acts. If a computer is caught and convicted of breaking those laws, the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or placed under the care of a supervisor computer that allows the &amp;quot;bad&amp;quot; computer to continue participating in certain restricted actions until the professional (supervisor) computer approves releasing it back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.&lt;br /&gt;
* Another concept worth investigating is Foucault&#039;s &amp;quot;Panopticon&amp;quot;, a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because those around you will see what you have done and their view of you will change; you will carry a social stigma. If computers adopted this, through some reputation mechanism, then distributed computing relationships could be formed and altered based on the actions of individual computers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[3]Nietzsche, Friedrich, &amp;lt;i&amp;gt;Ecce Homo &amp;amp; The Antichrist&amp;lt;/i&amp;gt;, translated by Thomas Wayne, New York, 2004. [http://books.google.ca/books?hl=en&amp;amp;lr=&amp;amp;id=xx6IfcqRvbwC&amp;amp;oi=fnd&amp;amp;pg=PA1&amp;amp;dq=ecce+homo&amp;amp;ots=44YGJlQ3Hb&amp;amp;sig=nzwX1IBP-Qj-TOnlmBOqGWUk810#v=onepage&amp;amp;q&amp;amp;f=false PDF] (preview copy)&lt;br /&gt;
* If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche&#039;s work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: &amp;quot;master-morality&amp;quot; and &amp;quot;slave-morality&amp;quot;.&lt;br /&gt;
** Master-morality is split along good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.&lt;br /&gt;
** Slave-morality is split along good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil covers worldly, cruel, selfish, wealthy, and aggressive.&lt;br /&gt;
* Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more &amp;quot;good&amp;quot; than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say DoS or spam), then those computers would be considered morally &amp;quot;bad&amp;quot;. Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don&#039;t care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.&lt;br /&gt;
*If this morality were tied to the reputation component, then all computers would be able to know how other computers &amp;quot;socially&amp;quot; behave. This would further allow punishment methods, as described in the Foucault section above, to be handed out based on how &amp;quot;bad&amp;quot; a computer is, and the offending computer can only be released when its morality is deemed appropriate by the supervising (professional) computer.&lt;br /&gt;
&lt;br /&gt;
=Proof-of-work=&lt;br /&gt;
There has been a lot of research in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (a tiny fraction of a cent per email), so we want to make it a bit more &amp;quot;expensive&amp;quot; for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result below a specific target value. You can statistically predict how long such a computation would take, and you could tune the target so the expected time is some particular value (10s, 1m, etc).&lt;br /&gt;
&lt;br /&gt;
I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for &amp;quot;trusted&amp;quot; entities and &amp;quot;untrusted&amp;quot; ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they&#039;re allowed to communicate with someone else. &lt;br /&gt;
&lt;br /&gt;
I have some ideas on how you could technically do this, which we can discuss in class. And now some links:&lt;br /&gt;
&lt;br /&gt;
[https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6 https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6]&lt;br /&gt;
[http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx]&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7467</id>
		<title>DistOS-2011W Naming and Locating Objects in Distributed Systems</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7467"/>
		<updated>2011-02-25T18:30:31Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;David Barrera&lt;br /&gt;
&lt;br /&gt;
dbarrera@ccsl.carleton.ca&lt;br /&gt;
&lt;br /&gt;
PDF available at [http://www.ccsl.carleton.ca/~dbarrera/distos.pdf http://www.ccsl.carleton.ca/~dbarrera/distos.pdf]&lt;br /&gt;
=Abstract=&lt;br /&gt;
This paper is a survey of existing approaches to naming and locating&lt;br /&gt;
resources in distributed file systems. We survey proposals from the past 20&lt;br /&gt;
years and find that while there have been many improvements in the hardware&lt;br /&gt;
that powers distributed file systems, there are only a few well known&lt;br /&gt;
proposals for dealing with resource location and naming.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
The ability to name resources is important in any file system. Mapping machine-readable&lt;br /&gt;
names to human-readable names allows users to forget about the way&lt;br /&gt;
the operating system (OS) is handling file access, and focus on completing&lt;br /&gt;
desired tasks. &lt;br /&gt;
&lt;br /&gt;
In traditional file systems, users are mostly responsible for creating&lt;br /&gt;
meaningful file hierarchies for storing and later searching for files. Users &lt;br /&gt;
must be aware of file system restrictions (e.g., file name length, file size,&lt;br /&gt;
etc.). The&lt;br /&gt;
underlying file system is only in charge of moving data to or from physical&lt;br /&gt;
storage media. Distributed file systems offer a series of advantages to users&lt;br /&gt;
(e.g., increased storage space and data reliability), but must be designed&lt;br /&gt;
such that end-users are not aware of all the logic and processing occurring in&lt;br /&gt;
the background. Indeed, a distributed file system loses its appeal if the user&lt;br /&gt;
is required to do all the heavy lifting. &lt;br /&gt;
&lt;br /&gt;
Take, for example, an end-user wanting to access a PDF document. In a local&lt;br /&gt;
file system, the user must only locate the PDF file in the file hierarchy and&lt;br /&gt;
retrieve it from disk. In a distributed file system, the PDF file might be&lt;br /&gt;
stored on a remote server, or perhaps stored multiple times on multiple&lt;br /&gt;
servers. The problem then becomes how to enable end-users to locate the&lt;br /&gt;
correct copy of a file amongst a large volume of shared data. &lt;br /&gt;
&lt;br /&gt;
This paper focuses on two important aspects of distributed file systems: (1)&lt;br /&gt;
how files are named or identified uniquely; and (2) how files are found by&lt;br /&gt;
clients or metadata servers once they are stored in the network. We survey&lt;br /&gt;
distributed file systems and file system designs from as early as 1989 and as&lt;br /&gt;
recently as 2006. We find that there are a relatively small number of ways a&lt;br /&gt;
distributed file system can approach the problem of naming and locating files,&lt;br /&gt;
and the selected approach is always dependent on the requirements of the&lt;br /&gt;
system. &lt;br /&gt;
&lt;br /&gt;
=Naming Resources=&lt;br /&gt;
On non-distributed systems (e.g., a stand-alone desktop computer), file systems&lt;br /&gt;
use&lt;br /&gt;
an object&#039;s absolute path as a unique identifier for that object in the file&lt;br /&gt;
system. This usually means that there can&#039;t be two objects with&lt;br /&gt;
the same name in the same location (e.g., a directory&lt;br /&gt;
like =/home/dbarrera/files/=  can&#039;t contain two files called&lt;br /&gt;
=file1= ). In distributed file systems, there is an obvious need for&lt;br /&gt;
allowing multiple files with the same human-readable name, and perhaps even the&lt;br /&gt;
same absolute path (although relative to a particular client) as other clients&lt;br /&gt;
sharing storage on the system. This section reviews&lt;br /&gt;
methods used by existing distributed file systems to handle object naming at a&lt;br /&gt;
massive (sometimes global) scale. &lt;br /&gt;
&lt;br /&gt;
Depending on the requirements of the file system (maximum number of clients,&lt;br /&gt;
concurrent read/writes, etc.), different approaches to naming might be taken.&lt;br /&gt;
Some file systems, such as Coda [10], aim to mimic UNIX-like file&lt;br /&gt;
naming. Other systems have relaxed POSIX-like behaviour to allow for&lt;br /&gt;
better scalability and speed. &lt;br /&gt;
&lt;br /&gt;
==Local Naming==&lt;br /&gt;
The Sun Network File System (NFS) specifies that each client sees a UNIX file&lt;br /&gt;
namespace with a private root. Because each client is free to manage&lt;br /&gt;
its own namespace, several workstations mounting the same remote directory&lt;br /&gt;
might not have the same view of the files contained in that directory. However,&lt;br /&gt;
if file-sharing or location transparency is required, it can be achieved by&lt;br /&gt;
convention (e.g., users agreeing on calling a file a specific name) rather than&lt;br /&gt;
by design. &lt;br /&gt;
&lt;br /&gt;
One of the first distributed file systems, the Apollo DOMAIN File System&lt;br /&gt;
[6] uses 64-bit unique identifiers (UIDs) for every object in the&lt;br /&gt;
system. Each Apollo client also has a UID created at the time of its manufacture.&lt;br /&gt;
When a new file is created, the UID for that file is derived from the time and&lt;br /&gt;
UID of the file&#039;s workstation (this guarantees uniqueness of UIDs per file&lt;br /&gt;
without a central server assigning them). &lt;br /&gt;
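A minimal sketch of this style of decentralized identifier follows; the field widths are illustrative assumptions, not the actual DOMAIN layout:&lt;br /&gt;

```python
import time

def make_uid(node_id):
    # High bits: a 48-bit millisecond clock. Low bits: a 16-bit
    # machine id stamped at manufacture. No coordination is needed
    # because the machine id keeps ids from different nodes disjoint.
    timestamp = int(time.time() * 1000) % (2 ** 48)
    return timestamp * (2 ** 16) + (node_id % (2 ** 16))

# Two machines generating a UID at the same instant still differ:
assert make_uid(1) != make_uid(2)
```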
&lt;br /&gt;
The Andrew file system [4] uses an internal 96-bit identifier for&lt;br /&gt;
uniquely identifying files. These identifiers are used in the background to&lt;br /&gt;
refer to files, but are never shown to users. Andrew clients see a partitioned&lt;br /&gt;
namespace comprised of a local and shared namespace. The shared namespace is&lt;br /&gt;
identical on all workstations, managed by a central server which can be&lt;br /&gt;
replicated. The local namespace is typically only used for files required to&lt;br /&gt;
boot an Andrew client, and to initialize the distributed client operation. &lt;br /&gt;
&lt;br /&gt;
==Cryptographic Naming==&lt;br /&gt;
OceanStore [5] stores objects at the lowest level by identifying them with a&lt;br /&gt;
globally unique identifier (GUID). GUIDs are convenient in distributed&lt;br /&gt;
systems because they do not require a central authority to give them out. This&lt;br /&gt;
allows any client on the system to autonomously generate a valid GUID&lt;br /&gt;
with low probability of collisions (GUIDs are typically long bit strings e.g.,&lt;br /&gt;
more than 128 bits). At the same time, autonomous,&lt;br /&gt;
de-centralized namespace management allows malicious clients to hijack&lt;br /&gt;
someone else&#039;s namespace and intentionally create collisions. To address this&lt;br /&gt;
issue, OceanStore uses a technique proposed by Mazieres et al. [7] called&lt;br /&gt;
&#039;&#039;self-certifying path names&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Self-certifying pathnames have all the benefits of public key cryptography&lt;br /&gt;
without the burden of key management, which is known to be difficult,&lt;br /&gt;
especially at a very large scale. One of the design goals of self-certifying&lt;br /&gt;
pathnames is for clients to cryptographically verify the contents of any file&lt;br /&gt;
on the network, without requiring external information. The novelty of this&lt;br /&gt;
approach is that file names inherently contain all information necessary to&lt;br /&gt;
communicate with remote servers. Essentially, an object&#039;s GUID is the secure&lt;br /&gt;
hash (SHA-1 or similar) of the object&#039;s owner&#039;s key and some human readable&lt;br /&gt;
name. By embedding a client key into the GUID, servers and other clients can&lt;br /&gt;
verify the identity and ownership of an object without querying a&lt;br /&gt;
third-party server.&lt;br /&gt;
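In outline, the construction looks like the following; the key material and file name are placeholders:&lt;br /&gt;

```python
import hashlib

def make_guid(owner_public_key, name):
    # GUID = secure hash of (owner key || human-readable name), so
    # anyone holding the key can recompute and verify the GUID locally.
    return hashlib.sha1(owner_public_key + name.encode()).hexdigest()

alice_key = b"alice-public-key-bytes"   # placeholder, not a real key
guid = make_guid(alice_key, "thesis.pdf")
assert guid == make_guid(alice_key, "thesis.pdf")       # verifiable offline
assert guid != make_guid(b"mallory-key", "thesis.pdf")  # not forgeable
```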
&lt;br /&gt;
Freenet [2] also uses keypair-based naming but in a slightly&lt;br /&gt;
different way than OceanStore. Freenet identifies all files by a binary key&lt;br /&gt;
which is obtained by applying a hash function. There are three types of keys in&lt;br /&gt;
this distributed file system:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Keyword-signed key (KSK)&#039;&#039;&#039; This is the simplest identifier because it&lt;br /&gt;
is derived from an arbitrary text string chosen by the user who is storing the&lt;br /&gt;
file on the network. A user storing a PDF document might use the text string&lt;br /&gt;
&amp;quot;freenet/distributed/file/system&amp;quot; to describe the file. The string is used to&lt;br /&gt;
deterministically generate a private/public keypair. The public part of the key&lt;br /&gt;
is hashed and becomes the file identifier. &lt;br /&gt;
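A toy version of the KSK derivation; hash chains stand in here for the real deterministic keypair generation:&lt;br /&gt;

```python
import hashlib

def ksk_file_key(descriptive_string):
    # Freenet derives an asymmetric keypair deterministically from the
    # string; in this sketch one hash stands in for the seed, a second
    # for the public key, and the hash of that public part is the key.
    seed = hashlib.sha256(descriptive_string.encode()).digest()
    public_part = hashlib.sha256(seed + b"public").digest()
    return hashlib.sha256(public_part).hexdigest()

# Anyone who knows (or guesses) the string recovers the same key:
k1 = ksk_file_key("freenet/distributed/file/system")
k2 = ksk_file_key("freenet/distributed/file/system")
assert k1 == k2
```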
&lt;br /&gt;
We note that files can be recovered by guessing or brute-forcing the text&lt;br /&gt;
string. Also, nothing stops two different users from coming up with the same&lt;br /&gt;
descriptive string, and the second user&#039;s file would be rejected by the system,&lt;br /&gt;
as there would be a collision in the namespace.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Signed-subspace key (SSK)&#039;&#039;&#039; This method enables personal namespaces&lt;br /&gt;
for users. For this to work, users generate a public/private keypair using a&lt;br /&gt;
good random number generator. The user also creates a descriptive text string,&lt;br /&gt;
but in this case, it is XORed with the public key to generate the file key.&lt;br /&gt;
This method allows users to manage their own namespace (i.e., collisions can&lt;br /&gt;
still occur locally if the user picks the same string for two files). Users can also&lt;br /&gt;
publish a list of keywords and a public key if they want to make those files&lt;br /&gt;
publicly available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Content-hash key (CHK)&#039;&#039;&#039; In this method, the file key is derived by&lt;br /&gt;
hashing the contents of the file. Files are also encrypted with a random encryption&lt;br /&gt;
key specific to that file. For others to retrieve the file, the owner makes&lt;br /&gt;
available the file hash along with the decryption key.&lt;br /&gt;
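The store/fetch round trip can be sketched as follows; the XOR keystream is a stand-in for a real cipher and is for illustration only:&lt;br /&gt;

```python
import hashlib
import os

def xor_stream(data, key):
    # Toy stream cipher: repeat a hash of the key. NOT secure.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % 32] for i, b in enumerate(data))

def store_chk(plaintext):
    key = os.urandom(32)                  # random per-file key
    ciphertext = xor_stream(plaintext, key)
    file_key = hashlib.sha256(ciphertext).hexdigest()   # content hash
    return file_key, ciphertext, key      # owner publishes file_key + key

def fetch_chk(file_key, ciphertext, key):
    # Retriever checks integrity against the content hash, then decrypts.
    assert hashlib.sha256(ciphertext).hexdigest() == file_key
    return xor_stream(ciphertext, key)

fk, ct, k = store_chk(b"report contents")
assert fetch_chk(fk, ct, k) == b"report contents"
```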
&lt;br /&gt;
==Hierarchical naming==&lt;br /&gt;
Cheriton et al. [1] suggest naming objects using a long&lt;br /&gt;
name which includes multiple pieces of information: (1) the resource&#039;s name&lt;br /&gt;
and location on the file server where it resides; (2) the organization where&lt;br /&gt;
that file server is located; and (3) a global administrative domain&lt;br /&gt;
representing all the organizations participating in the distributed file system.&lt;br /&gt;
For example, a file name of &amp;quot;/edu/stanford/server4/bin/listdir&amp;quot; is split&lt;br /&gt;
into: /edu (global domain), /stanford/server4 (organization domain), and /bin/listdir (directory and file).&lt;br /&gt;
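The split can be expressed directly; treating the organization domain as exactly two path components is an assumption of this sketch:&lt;br /&gt;

```python
def split_global_name(name):
    # "/edu/stanford/server4/bin/listdir" breaks into three levels:
    # global domain, organization domain, and per-server path.
    parts = name.strip("/").split("/")
    return parts[0], "/".join(parts[1:3]), "/".join(parts[3:])

assert split_global_name("/edu/stanford/server4/bin/listdir") == (
    "edu", "stanford/server4", "bin/listdir")
```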
&lt;br /&gt;
This naming scheme gives clients all the necessary information (using only the&lt;br /&gt;
file name) to locate a file in a globally distributed file system. While this&lt;br /&gt;
may seem like a good solution, there are a few inherent limitations to the&lt;br /&gt;
proposal.&lt;br /&gt;
&lt;br /&gt;
First, file replication and load balancing can only be done at the lowest level&lt;br /&gt;
(i.e., in the file server selected by the organization hosting the file). This&lt;br /&gt;
can lead to a bottleneck when multiple files in the same organization become&lt;br /&gt;
&amp;quot;hot&amp;quot;. The authors suggest using caching and multicast to improve performance&lt;br /&gt;
and avoid congestion on inter-organization links. Second, it requires all&lt;br /&gt;
organizations participating in the system to agree or regulate the common&lt;br /&gt;
namespace, much like the current Domain Name System (DNS). For this to work&lt;br /&gt;
there must be an organization in which each stakeholder in the system is&lt;br /&gt;
equally represented. While such systems do exist currently (e.g.,&lt;br /&gt;
ICANN, the Internet Corporation for Assigned Names and Numbers, a non-profit&lt;br /&gt;
organization that represents regional registrars, the Internet&lt;br /&gt;
Engineering Task Force (IETF), and Internet users and providers to help keep the&lt;br /&gt;
Internet secure, stable and inter-operable), they have large amounts of&lt;br /&gt;
administrative overhead and therefore limit the speed at which changes to&lt;br /&gt;
deployed implementations can take place. &lt;br /&gt;
&lt;br /&gt;
One advantage of the approach of Cheriton et al. is that names and directory&lt;br /&gt;
structures must only be unique within an organization/server. The system as a&lt;br /&gt;
whole does not have to keep track of every organization-level implementation,&lt;br /&gt;
yet different organizations should still be able to exchange data.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
The Google File System (GFS) [3] takes a different approach to&lt;br /&gt;
naming files. GFS assumes that all clients communicate with a single master&lt;br /&gt;
server, which keeps a table mapping full pathnames to metadata (file locks and&lt;br /&gt;
location). The namespace is therefore centrally managed, and all clients must&lt;br /&gt;
register file operations with the master before they can be performed. While&lt;br /&gt;
this architecture has an obvious central point of failure (which can be&lt;br /&gt;
addressed by replication), it has the advantage of not having to deal with a&lt;br /&gt;
distributed namespace. This central design also has the advantage of improving&lt;br /&gt;
data consistency across multi-level distribution nodes. It also allows data&lt;br /&gt;
to be moved to optimal nodes to increase performance or distribute load. It&#039;s&lt;br /&gt;
worth noting that lookup tables are a fundamentally different way to find&lt;br /&gt;
contents in a directory as compared to UNIX &#039;&#039;inodes&#039;&#039;  and related data&lt;br /&gt;
structures. This approach has inherent limitations such as not being able to&lt;br /&gt;
support symlinks.&lt;br /&gt;
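The lookup-table idea reduces to a flat map. This is a deliberately minimal sketch, not GFS&#039;s actual chunk and lease machinery:&lt;br /&gt;

```python
class Master:
    # Flat table from full pathname to metadata; there is no per-directory
    # inode structure, which is why features like symlinks do not fit.
    def __init__(self):
        self.table = {}

    def create(self, path, locations):
        self.table[path] = {"lock": None, "locations": list(locations)}

    def lookup(self, path):
        # Every client operation begins with one table lookup here.
        return self.table[path]["locations"]

m = Master()
m.create("/logs/feb.log", ["chunkserver3", "chunkserver7"])
assert m.lookup("/logs/feb.log") == ["chunkserver3", "chunkserver7"]
```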
&lt;br /&gt;
Ceph [11] client nodes use near-POSIX file system interfaces which are&lt;br /&gt;
relayed back to a central metadata cluster. The metadata cluster is responsible&lt;br /&gt;
for managing the system-wide namespace, coordinating security and verifying&lt;br /&gt;
consistency. Ceph decouples data from metadata which enables the system to also&lt;br /&gt;
distribute metadata servers themselves. The metadata servers store pointers to&lt;br /&gt;
&amp;quot;object-storage clusters&amp;quot; which hold the actual data portion of the file. The&lt;br /&gt;
metadata servers also handle file read and write operations, which then&lt;br /&gt;
redirect clients to the appropriate object storage cluster or device. &lt;br /&gt;
&lt;br /&gt;
=Locating Resources=&lt;br /&gt;
&lt;br /&gt;
==Local File Systems==&lt;br /&gt;
In some distributed systems, files are copied locally and replicated to remote&lt;br /&gt;
servers in the background. NFS [9] is one example where clients&lt;br /&gt;
mount the remote file system locally. The remote directory structure is mapped&lt;br /&gt;
on to a local namespace which makes files transparently accessible to&lt;br /&gt;
clients. In this scheme, there is no need for distributing indexes or metadata,&lt;br /&gt;
since all files appear to be local. A client can find files on the&lt;br /&gt;
&amp;quot;distributed&amp;quot; file system in the same way local files are found.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
File systems that use lookup tables for storing the location and&lt;br /&gt;
metadata of files (e.g., [3,11]) can locate resources trivially by&lt;br /&gt;
querying the lookup table. The table usually contains a pointer to either the&lt;br /&gt;
file itself or a server hosting that file who can in turn handle the file&lt;br /&gt;
operation request. &lt;br /&gt;
&lt;br /&gt;
A very basic implementation of a metadata lookup is used in the Apollo Domain&lt;br /&gt;
File System [6]. A central name server maps client-readable strings&lt;br /&gt;
(e.g., &amp;quot;/home/dbarrera/file1&amp;quot; ) to UIDs. The name server can be&lt;br /&gt;
distributed by replicating it at multiple locations, allowing clients to query&lt;br /&gt;
the nearest server instead of a central one. &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses unique file identifiers to &lt;br /&gt;
populate a &#039;&#039;location database&#039;&#039;  on the central server which maps file&lt;br /&gt;
identifiers to locations. The server is therefore responsible for forwarding&lt;br /&gt;
file access requests to the correct client hosting that file.&lt;br /&gt;
&lt;br /&gt;
==Distributed Index Search==&lt;br /&gt;
Systems like Freenet [2] by design want to make it difficult for&lt;br /&gt;
unauthorized users to access restricted files. This is a difficult problem,&lt;br /&gt;
since the system aims to be highly distributed, but at the same time provide&lt;br /&gt;
guarantees that files won&#039;t be read or modified by unauthorized third-parties.&lt;br /&gt;
However, Freenet has developed an interesting approach to locating files: when&lt;br /&gt;
a file is requested from the network, a user must first obtain or calculate the&lt;br /&gt;
file key. The user&#039;s node requests that file&lt;br /&gt;
from neighboring nodes, who in turn check if the file is stored locally, and if&lt;br /&gt;
not forward the request to the next nearest neighbor. If a node cannot forward&lt;br /&gt;
a request any longer (because a loop would be created or all nodes have&lt;br /&gt;
already been queried), then a failure message is transmitted back to the&lt;br /&gt;
previous node. If a file is found at some point along the request path,&lt;br /&gt;
then the file is sent back through all the intermediate nodes until it reaches&lt;br /&gt;
the request originator, which allows these intermediate nodes to keep a copy of&lt;br /&gt;
the file as a cache. The next time that file key is requested, a node which is&lt;br /&gt;
closer might have it, which will increase the retrieval speed. Nodes&lt;br /&gt;
&amp;quot;forget&amp;quot; about cached copies of files in a least recently used (LRU) manner,&lt;br /&gt;
allowing the network to automatically  balance load and use available space&lt;br /&gt;
optimally. &lt;br /&gt;
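The forwarding-and-cache-on-return behaviour can be sketched as a depth-first search; real Freenet also uses key-distance routing and hop limits, which are omitted here:&lt;br /&gt;

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.neighbours = []
        self.store = {}        # LRU-evicted cache in real Freenet

def request(node, file_key, visited=None):
    visited = visited if visited is not None else set()
    visited.add(node.name)
    if file_key in node.store:
        return node.store[file_key]
    for n in node.neighbours:
        if n.name not in visited:
            data = request(n, file_key, visited)
            if data is not None:
                node.store[file_key] = data   # cache on the return path
                return data
    return None                               # failure flows back

a, b, c = Node("a"), Node("b"), Node("c")
a.neighbours, b.neighbours = [b], [c]
c.store["k1"] = b"data"
assert request(a, "k1") == b"data"
assert "k1" in b.store     # intermediate node now caches the file
```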
&lt;br /&gt;
Distributing a file index was also proposed by Plaxton et al. [8].&lt;br /&gt;
Their proposal, however, attempts to have all nodes in the network maintain a&lt;br /&gt;
&#039;&#039;virtual tree&#039;&#039;. The tree information is distributed such that each node&lt;br /&gt;
knows about copies of files residing on itself and all nodes that form the&lt;br /&gt;
subtree rooted at that node. All nodes are constantly being updated with&lt;br /&gt;
neighbor information, meaning that new nodes slowly obtain tree information to&lt;br /&gt;
become the roots of their subtrees. This method has the advantage of&lt;br /&gt;
distributing load and providing a hierarchical search functionality that can&lt;br /&gt;
use well known algorithms (BFS, DFS) to find resources on a network.&lt;br /&gt;
&lt;br /&gt;
==Pseudo-random Data Distribution==&lt;br /&gt;
Ceph [11] distributes data through a method that maximizes bandwidth and&lt;br /&gt;
efficiently uses storage resources. Ceph also avoids data imbalance (e.g.,&lt;br /&gt;
new devices are under-used) and load-asymmetries (e.g., often requested data&lt;br /&gt;
placed on only new devices) with a globally known algorithm called CRUSH&lt;br /&gt;
(Controlled Replication Under Scalable Hashing). By using a predefined number&lt;br /&gt;
of &#039;&#039;placement groups&#039;&#039;  (the smallest unit of object storage groups), the&lt;br /&gt;
CRUSH algorithm stores and replicates data across the network in a&lt;br /&gt;
pseudo-random way. This algorithm tells the metadata servers both where the&lt;br /&gt;
data should be stored and where it can be found later, which helps clients and&lt;br /&gt;
metadata servers in locating resources. &lt;br /&gt;
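The key property (everyone computes the same placement from the object name alone, so no location table is stored) can be shown with a rendezvous-hashing stand-in for CRUSH:&lt;br /&gt;

```python
import hashlib

def place(obj, devices, replicas=2):
    # Rank devices by a hash of (object, device); the top entries hold
    # the replicas. Clients and metadata servers recompute the same
    # answer independently.
    def score(dev):
        return hashlib.sha1(f"{obj}:{dev}".encode()).hexdigest()
    return sorted(devices, key=score)[:replicas]

devices = ["osd0", "osd1", "osd2", "osd3"]
assert place("thesis.pdf", devices) == place("thesis.pdf", devices)
assert len(place("thesis.pdf", devices)) == 2
```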
&lt;br /&gt;
=Conclusions=&lt;br /&gt;
This paper has presented a brief survey of distributed file system research&lt;br /&gt;
conducted over the past 20 years. A wide range of distributed file systems have&lt;br /&gt;
been designed to have varying levels of scalability, usability and efficiency.&lt;br /&gt;
Depending on the requirements of a distributed file system, different approaches&lt;br /&gt;
may be taken to address two main concerns: file naming and file retrieval.&lt;br /&gt;
Unfortunately there is no clear winner in either of these categories, which&lt;br /&gt;
means that selecting the &amp;quot;right&amp;quot; method for a given file system will always&lt;br /&gt;
depend on the requirements and users of that system.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] D. R. Cheriton and T. P. Mann. Decentralizing a global naming service for improved performance and fault tolerance. ACM Transactions on Computer Systems, 7:147–183, 1989.&lt;br /&gt;
&lt;br /&gt;
[2] I. Clarke, O. Sandberg, B. Wiley, and T. Hong. Freenet: A distributed anonymous information storage and retrieval system. In Designing Privacy Enhancing Technologies, pages 46–66. Springer, 2001.&lt;br /&gt;
&lt;br /&gt;
[3] S. Ghemawat, H. Gobioff, and S. Leung. The Google file system. ACM SIGOPS Operating Systems Review, 37(5):29–43, 2003.&lt;br /&gt;
&lt;br /&gt;
[4] J. Howard. An overview of the Andrew file system. Carnegie Mellon University Information Technology Center, 1988.&lt;br /&gt;
&lt;br /&gt;
[5] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, C. Wells, et al. OceanStore: An architecture for global-scale persistent storage. ACM SIGARCH Computer Architecture News, 28(5):190–201, 2000.&lt;br /&gt;
&lt;br /&gt;
[6] P. Levine. The Apollo DOMAIN Distributed File System. NATO ASI Series: Theory and Practice of Distributed Operating Systems, Y. Paker, J.-P. Banatre, M. Bozyigit (eds.), pages 241–260.&lt;br /&gt;
&lt;br /&gt;
[7] D. Mazieres, M. Kaminsky, M. Kaashoek, and E. Witchel. Separating key management from file system security. ACM SIGOPS Operating Systems Review, 33(5):124–139, 1999.&lt;br /&gt;
&lt;br /&gt;
[8] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing nearby copies of replicated objects in a distributed environment. In Proceedings of the 9th ACM Symposium on Parallel Algorithms and Architectures (SPAA), pages 311–320, 1997.&lt;br /&gt;
&lt;br /&gt;
[9] M. Satyanarayanan. A survey of distributed file systems. Annual Review of Computer Science, 4(1):73–104, 1990.&lt;br /&gt;
&lt;br /&gt;
[10] M. Satyanarayanan, J. Kistler, P. Kumar, M. Okasaki, E. Siegel, and D. Steere. Coda: a highly available file system for a distributed workstation environment. Computers, IEEE Transactions on, 39(4):447–459, Apr. 1990.&lt;br /&gt;
&lt;br /&gt;
[11] S. Weil, S. Brandt, E. Miller, D. Long, and C. Maltzahn. Ceph: A scalable, high-performance distributed file system. In Proceedings of the 7th symposium on Operating systems design and implementation, pages 307–320. USENIX Association, 2006.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7466</id>
		<title>DistOS-2011W Naming and Locating Objects in Distributed Systems</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7466"/>
		<updated>2011-02-25T18:28:29Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;David Barrera&lt;br /&gt;
dbarrera@ccsl.carleton.ca&lt;br /&gt;
PDF available at [http://www.ccsl.carleton.ca/~dbarrera/distos.pdf]&lt;br /&gt;
=Abstract=&lt;br /&gt;
This paper is a survey of existing approaches to naming and locating&lt;br /&gt;
resources in distributed file systems. We survey proposals from the past 20&lt;br /&gt;
years and find that while there have been many improvements in the hardware&lt;br /&gt;
that powers distributed file systems, there are only a few well known&lt;br /&gt;
proposals for dealing with resource location and naming.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
The ability to name resources is important in any file system. Mapping machine-readable&lt;br /&gt;
names to human-readable names allows users to forget about the way&lt;br /&gt;
the operating system (OS) is handling file access, and focus on completing&lt;br /&gt;
desired tasks. &lt;br /&gt;
&lt;br /&gt;
In traditional file systems, users are mostly responsible for creating&lt;br /&gt;
meaningful file hierarchies for storing and later searching for files. Users &lt;br /&gt;
must be aware of file system restrictions (e.g., file name length, file size,&lt;br /&gt;
etc.). The&lt;br /&gt;
underlying file system is only in charge of moving data to or from physical&lt;br /&gt;
storage media. Distributed file systems offer a series of advantages to users&lt;br /&gt;
(e.g., increased storage space and data reliability), but must be designed&lt;br /&gt;
such that end-users are not aware of all the logic and processing occurring in&lt;br /&gt;
the background. Indeed, a distributed file system loses its appeal if the user&lt;br /&gt;
is required to do all the heavy lifting. &lt;br /&gt;
&lt;br /&gt;
Take, for example, an end-user wanting to access a PDF document. In a local&lt;br /&gt;
file system, the user must only locate the PDF file in the file hierarchy and&lt;br /&gt;
retrieve it from disk. In a distributed file system, the PDF file might be&lt;br /&gt;
stored on a remote server, or perhaps stored multiple times on multiple&lt;br /&gt;
servers. The problem then becomes how to enable end-users to locate the&lt;br /&gt;
correct copy of a file amongst a large volume of shared data. &lt;br /&gt;
&lt;br /&gt;
This paper focuses on two important aspects of distributed file systems: (1)&lt;br /&gt;
how files are named or identified uniquely; and (2) how files are found by&lt;br /&gt;
clients or metadata servers once they are stored in the network. We survey&lt;br /&gt;
distributed file systems and file system designs from as early as 1989 and as&lt;br /&gt;
recently as 2006. We find that there are a relatively small number of ways a&lt;br /&gt;
distributed file system can approach the problem of naming and locating files,&lt;br /&gt;
and the selected approach is always dependent on the requirements of the&lt;br /&gt;
system. &lt;br /&gt;
&lt;br /&gt;
=Naming Resources=&lt;br /&gt;
On non-distributed systems (e.g., a stand-alone desktop computer), file systems&lt;br /&gt;
use&lt;br /&gt;
an object&#039;s absolute path as a unique identifier for that object in the file&lt;br /&gt;
system. This usually means that there can&#039;t be two objects with&lt;br /&gt;
the same name in the same location (e.g., a directory&lt;br /&gt;
like =/home/dbarrera/files/=  can&#039;t contain two files called&lt;br /&gt;
=file1= ). In distributed file systems, there is an obvious need for&lt;br /&gt;
allowing multiple files with the same human-readable name, and perhaps even the&lt;br /&gt;
same absolute path (although relative to a particular client) as other clients&lt;br /&gt;
sharing storage on the system. This section reviews&lt;br /&gt;
methods used by existing distributed file systems to handle object naming at a&lt;br /&gt;
massive (sometimes global) scale. &lt;br /&gt;
&lt;br /&gt;
Depending on the requirements of the file system (maximum number of clients,&lt;br /&gt;
concurrent read/writes, etc.), different approaches to naming might be taken.&lt;br /&gt;
Some file systems, such as Coda [10], aim to mimic UNIX-like file&lt;br /&gt;
naming. Other systems have relaxed POSIX-like behaviour to allow for&lt;br /&gt;
better scalability and speed. &lt;br /&gt;
&lt;br /&gt;
==Local Naming==&lt;br /&gt;
The Sun Network File System (NFS) specifies that each client sees a UNIX file&lt;br /&gt;
namespace with a private root. Because each client is free to manage&lt;br /&gt;
its own namespace, several workstations mounting the same remote directory&lt;br /&gt;
might not have the same view of the files contained in that directory. However,&lt;br /&gt;
if file-sharing or location transparency is required, it can be achieved by&lt;br /&gt;
convention (e.g., users agreeing on calling a file a specific name) rather than&lt;br /&gt;
by design. &lt;br /&gt;
&lt;br /&gt;
One of the first distributed file systems, the Apollo DOMAIN File System&lt;br /&gt;
[6] uses 64-bit unique identifiers (UIDs) for every object in the&lt;br /&gt;
system. Each Apollo client also has a UID created at the time of its manufacture.&lt;br /&gt;
When a new file is created, the UID for that file is derived from the time and&lt;br /&gt;
UID of the file&#039;s workstation (this guarantees uniqueness of UIDs per file&lt;br /&gt;
without a central server assigning them). &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses an internal 96-bit identifier for&lt;br /&gt;
uniquely identifying files. These identifiers are used in the background to&lt;br /&gt;
refer to files, but are never shown to users. Andrew clients see a partitioned&lt;br /&gt;
namespace comprised of a local and shared namespace. The shared namespace is&lt;br /&gt;
identical on all workstations, managed by a central server which can be&lt;br /&gt;
replicated. The local namespace is typically only used for files required to&lt;br /&gt;
boot an Andrew client, and to initialize the distributed client operation. &lt;br /&gt;
&lt;br /&gt;
==Cryptographic Naming==&lt;br /&gt;
OceanStore [5] stores objects at the lowest level by identifying them with a&lt;br /&gt;
globally unique identifier (GUID). GUIDs are convenient in distributed&lt;br /&gt;
systems because they do not require a central authority to give them out. This&lt;br /&gt;
allows any client on the system to autonomously generate a valid GUID&lt;br /&gt;
with low probability of collisions (GUIDs are typically long bit strings e.g.,&lt;br /&gt;
more than 128 bits). At the same time, autonomous,&lt;br /&gt;
de-centralized namespace management allows malicious clients to hijack&lt;br /&gt;
someone else&#039;s namespace and intentionally create collisions. To address this&lt;br /&gt;
issue, OceanStore uses a technique proposed by Mazieres et al. [7] called&lt;br /&gt;
&#039;&#039;self-certifying path names&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Self-certifying pathnames have all the benefits of public key cryptography&lt;br /&gt;
without the burden of key management, which is known to be difficult,&lt;br /&gt;
especially at a very large scale. One of the design goals of self-certifying&lt;br /&gt;
pathnames is for clients to cryptographically verify the contents of any file&lt;br /&gt;
on the network, without requiring external information. The novelty of this&lt;br /&gt;
approach is that file names inherently contain all information necessary to&lt;br /&gt;
communicate with remote servers. Essentially, an object&#039;s GUID is the secure&lt;br /&gt;
hash (SHA-1 or similar) of the object&#039;s owner&#039;s key and some human readable&lt;br /&gt;
name. By embedding a client key into the GUID, servers and other clients can&lt;br /&gt;
verify the identity and ownership of an object without querying a&lt;br /&gt;
third-party server.&lt;br /&gt;
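A minimal sketch of this construction, assuming the GUID is the hash of the owner key concatenated with the name (function names are hypothetical):&lt;br /&gt;

```python
import hashlib

def make_guid(owner_public_key, name):
    # GUID = secure hash of the owner's key concatenated with a
    # human-readable name (SHA-1 shown because the text mentions it;
    # any strong hash works the same way).
    return hashlib.sha1(owner_public_key + name.encode()).digest()

def verify_guid(guid, owner_public_key, name):
    # Anyone holding the key and name can re-derive the GUID, so
    # ownership is checked without a third-party server.
    return guid == make_guid(owner_public_key, name)
```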
&lt;br /&gt;
Freenet [2] also uses keypair-based naming but in a slightly&lt;br /&gt;
different way than OceanStore. Freenet identifies all files by a binary key&lt;br /&gt;
which is obtained by applying a hash function. There are three types of keys in&lt;br /&gt;
this distributed file system:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Keyword-signed key (KSK)&#039;&#039;&#039; This is the simplest identifier because it&lt;br /&gt;
is derived from an arbitrary text string chosen by the user who is storing the&lt;br /&gt;
file on the network. A user storing a PDF document might use the text string&lt;br /&gt;
&amp;quot;freenet/distributed/file/system&amp;quot; to describe the file. The string is used to&lt;br /&gt;
deterministically generate a private/public keypair. The public part of the key&lt;br /&gt;
is hashed and becomes the file identifier. &lt;br /&gt;
&lt;br /&gt;
We note that files can be recovered by guessing or brute-forcing the text&lt;br /&gt;
string. Also, nothing stops two different users from coming up with the same&lt;br /&gt;
descriptive string, and the second user&#039;s file would be rejected by the system,&lt;br /&gt;
as there would be a collision in the namespace.&lt;br /&gt;
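The KSK derivation can be sketched as below; the hash-chain &amp;quot;keypair&amp;quot; is a toy stand-in for Freenet&#039;s real asymmetric keys, so only the deterministic structure is faithful:&lt;br /&gt;

```python
import hashlib

def ksk_file_key(descriptive_string):
    # Derive a "keypair" deterministically from the string, then hash
    # the public half.  The hash-chain keypair is a toy stand-in for
    # Freenet's real asymmetric keys.
    seed = hashlib.sha256(descriptive_string.encode()).digest()
    private_key = hashlib.sha256(seed + b"priv").digest()
    public_key = hashlib.sha256(private_key).digest()
    return hashlib.sha256(public_key).hexdigest()
```

Since the derivation is deterministic, any two users choosing the same descriptive string derive the same identifier, which is exactly the namespace collision noted above.&lt;br /&gt;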
&lt;br /&gt;
&#039;&#039;&#039;Signed-subspace key (SSK)&#039;&#039;&#039; This method enables personal namespaces&lt;br /&gt;
for users. For this to work, users generate a public/private keypair using a&lt;br /&gt;
good random number generator. The user also creates a descriptive text string,&lt;br /&gt;
but in this case, it is XORed with the public key to generate the file key.&lt;br /&gt;
This method allows users to manage their own namespace (i.e., collisions can&lt;br /&gt;
still occur locally if the user picks the same string for two files). Users can&lt;br /&gt;
also&lt;br /&gt;
publish a list of keywords and a public key if they want to make those files&lt;br /&gt;
publicly available. &lt;br /&gt;
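A simplified version of the SSK construction, hashing both inputs to equal length before the XOR (the pre-hashing step is an assumption of this sketch):&lt;br /&gt;

```python
import hashlib

def ssk_file_key(public_key, descriptive_string):
    # Hash both inputs to equal length, XOR them, and hash the result;
    # a simplified version of the SSK construction described above.
    s = hashlib.sha256(descriptive_string.encode()).digest()
    k = hashlib.sha256(public_key).digest()
    xored = bytes(a ^ b for a, b in zip(s, k))
    return hashlib.sha256(xored).hexdigest()
```

Two users picking the same string get different file keys because their public keys differ, which is what makes the namespace personal.&lt;br /&gt;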
&lt;br /&gt;
&#039;&#039;&#039;Content-hash key (CHK)&#039;&#039;&#039; In this method, the file key is derived by&lt;br /&gt;
hashing the contents of the file. Files are also encrypted with a random encryption&lt;br /&gt;
key specific to that file. For others to retrieve the file, the owner makes&lt;br /&gt;
available the file hash along with the decryption key.&lt;br /&gt;
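The CHK scheme can be sketched as follows; the XOR keystream cipher is a stand-in for a real symmetric cipher, and the function names are hypothetical:&lt;br /&gt;

```python
import hashlib
import os

def xor_stream(data, key):
    # Toy keystream cipher: XOR each byte with SHA-256(key || block
    # counter).  A stand-in for a real symmetric cipher.
    out = bytearray(len(data))
    for i in range(len(data)):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out[i] = data[i] ^ block[i % 32]
    return bytes(out)

def publish_chk(contents):
    # The file key is simply the hash of the contents; the stored copy
    # is encrypted under a random per-file key.
    file_key = hashlib.sha256(contents).hexdigest()
    enc_key = os.urandom(32)
    ciphertext = xor_stream(contents, enc_key)
    return file_key, enc_key, ciphertext   # owner hands out (file_key, enc_key)

def retrieve_chk(file_key, enc_key, ciphertext):
    contents = xor_stream(ciphertext, enc_key)
    # The content hash doubles as an integrity check on retrieval.
    assert hashlib.sha256(contents).hexdigest() == file_key
    return contents
```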
&lt;br /&gt;
==Hierarchical naming==&lt;br /&gt;
Cheriton et al. [1] suggest naming objects using a long&lt;br /&gt;
name which includes multiple pieces of information: (1) the resource&#039;s name&lt;br /&gt;
and location on the file server where it resides; (2) the organization where&lt;br /&gt;
that file server is located; and (3) a global administrative domain&lt;br /&gt;
representing all the organizations participating in the distributed file system.&lt;br /&gt;
For example, a file name of &amp;quot;edu/stanford/server4/bin/listdir&amp;quot; is split&lt;br /&gt;
into: edu (global domain), /stanford/server4 (organization domain), and /bin/listdir (directory and file).&lt;br /&gt;
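The split described above can be written as a small parser; the assumption that two components name the organization and its server is specific to this sketch:&lt;br /&gt;

```python
def parse_global_name(name, org_depth=2):
    # Split a name like "edu/stanford/server4/bin/listdir" into the
    # three administrative levels described above.  org_depth (number
    # of components naming the organization and its server) is an
    # assumption of this sketch.
    parts = name.strip("/").split("/")
    global_domain = parts[0]
    organization = "/" + "/".join(parts[1:1 + org_depth])
    path = "/" + "/".join(parts[1 + org_depth:])
    return global_domain, organization, path
```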
&lt;br /&gt;
This naming scheme gives clients all the necessary information (using only the&lt;br /&gt;
file name) to locate a file in a globally distributed file system. While this&lt;br /&gt;
may seem like a good solution, there are a few inherent limitations to the&lt;br /&gt;
proposal.&lt;br /&gt;
&lt;br /&gt;
First, file replication and load balancing can only be done at the lowest level&lt;br /&gt;
(i.e., in the file server selected by the organization hosting the file). This&lt;br /&gt;
can lead to a bottleneck when multiple files in the same organization become&lt;br /&gt;
&amp;quot;hot&amp;quot;. The authors suggest using caching and multicast to improve performance&lt;br /&gt;
and avoid congestion on inter-organization links. Second, it requires all&lt;br /&gt;
organizations participating in the system to agree or regulate the common&lt;br /&gt;
namespace, much like the current Domain Name System (DNS). For this to work&lt;br /&gt;
there must be an organization in which each stakeholder in the system is&lt;br /&gt;
equally represented. While such organizations do exist currently (e.g., the&lt;br /&gt;
Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit&lt;br /&gt;
organization that represents regional registrars, the Internet Engineering&lt;br /&gt;
Task Force (IETF), and Internet users and providers in keeping the Internet&lt;br /&gt;
secure, stable and inter-operable), they have large amounts of&lt;br /&gt;
administrative overhead and therefore limit the speed at which changes to&lt;br /&gt;
deployed implementations can take place. &lt;br /&gt;
&lt;br /&gt;
One advantage of the approach of Cheriton et al. is that names and directory&lt;br /&gt;
structures must only be unique within an organization/server. The system as a&lt;br /&gt;
whole does not have to keep track of every organization-level implementation,&lt;br /&gt;
yet different organizations should still be able to exchange data.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
The Google File System (GFS) [3] takes a different approach to&lt;br /&gt;
naming files. GFS assumes that all the clients communicate with a single master&lt;br /&gt;
server, which keeps a table mapping full pathnames to metadata (file locks and&lt;br /&gt;
location). The namespace is therefore centrally managed, and all clients must&lt;br /&gt;
register file operations with the master before they can be performed. While&lt;br /&gt;
this architecture has an obvious central point of failure (which can be&lt;br /&gt;
addressed by replication), it has the advantage of not having to deal with a&lt;br /&gt;
distributed namespace. This central design also has the advantage of improving&lt;br /&gt;
data consistency across multi-level distribution nodes. It also allows data&lt;br /&gt;
to be moved to optimal nodes to increase performance or distribute load. It&#039;s&lt;br /&gt;
worth noting that lookup tables are a fundamentally different way to find&lt;br /&gt;
contents in a directory as compared to UNIX &#039;&#039;inodes&#039;&#039;  and related data&lt;br /&gt;
structures. This approach has inherent limitations such as not being able to&lt;br /&gt;
support symlinks.&lt;br /&gt;
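The pathname-to-metadata table described above can be sketched as a tiny in-memory service; the class and field names are hypothetical, not GFS&#039;s actual interface:&lt;br /&gt;

```python
class MetadataServer:
    # Minimal sketch of a central lookup table mapping full pathnames
    # to metadata, in the spirit of the GFS master; field names are
    # illustrative.
    def __init__(self):
        self.table = {}

    def register(self, path, server, locked=False):
        self.table[path] = {"server": server, "locked": locked}

    def locate(self, path):
        entry = self.table.get(path)
        return entry["server"] if entry else None
```

Because clients resolve full pathnames against one table, there is no distributed namespace to reconcile, which is the trade-off the text describes.&lt;br /&gt;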
&lt;br /&gt;
Ceph [11] client nodes use near-POSIX file system interfaces which are&lt;br /&gt;
relayed back to a central metadata cluster. The metadata cluster is responsible&lt;br /&gt;
for managing the system-wide namespace, coordinating security and verifying&lt;br /&gt;
consistency. Ceph decouples data from metadata which enables the system to also&lt;br /&gt;
distribute metadata servers themselves. The metadata servers store pointers to&lt;br /&gt;
&amp;quot;object-storage clusters&amp;quot; which hold the actual data portion of the file. The&lt;br /&gt;
metadata servers also handle file read and write operations, which then&lt;br /&gt;
redirect clients to the appropriate object storage cluster or device. &lt;br /&gt;
&lt;br /&gt;
=Locating Resources=&lt;br /&gt;
&lt;br /&gt;
==Local File Systems==&lt;br /&gt;
In some distributed systems, files are copied locally and replicated to remote&lt;br /&gt;
servers in the background. NFS [9] is one example where clients&lt;br /&gt;
mount the remote file system locally. The remote directory structure is mapped&lt;br /&gt;
on to a local namespace which makes files transparently accessible to&lt;br /&gt;
clients. In this scheme, there is no need for distributing indexes or metadata,&lt;br /&gt;
since all files appear to be local. A client can find files on the&lt;br /&gt;
&amp;quot;distributed&amp;quot; file system in the same way local files are found.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
File systems that use lookup tables for storing the location and metadata&lt;br /&gt;
of files (e.g., [3,11]) can locate resources trivially by querying the&lt;br /&gt;
lookup table. The table usually contains a pointer to either the file itself&lt;br /&gt;
or a server hosting that file, which can in turn handle the file&lt;br /&gt;
operation request. &lt;br /&gt;
&lt;br /&gt;
A very basic implementation of a metadata lookup is used in the Apollo Domain&lt;br /&gt;
File System [6]. A central name server maps client-readable strings&lt;br /&gt;
(e.g., &amp;quot;/home/dbarrera/file1&amp;quot; ) to UIDs. The name server can be&lt;br /&gt;
distributed by replicating it at multiple locations, allowing clients to query&lt;br /&gt;
the nearest server instead of a central one. &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses unique file identifiers to &lt;br /&gt;
populate a &#039;&#039;location database&#039;&#039;  on the central server which maps file&lt;br /&gt;
identifiers to locations. The server is therefore responsible for forwarding&lt;br /&gt;
file access requests to the correct client hosting that file.&lt;br /&gt;
&lt;br /&gt;
==Distributed Index Search==&lt;br /&gt;
Systems like Freenet [2] are designed to make it difficult for&lt;br /&gt;
unauthorized users to access restricted files. This is a difficult problem,&lt;br /&gt;
since the system aims to be highly distributed, but at the same time provide&lt;br /&gt;
guarantees that files won&#039;t be read or modified by unauthorized third-parties.&lt;br /&gt;
However, Freenet has developed an interesting approach to locating files: when&lt;br /&gt;
a file is requested from the network, a user must first obtain or calculate the&lt;br /&gt;
file key. The user&#039;s node requests that file&lt;br /&gt;
from neighboring nodes, which in turn check if the file is stored locally, and if&lt;br /&gt;
not forward the request to the next nearest neighbor. If a node cannot forward&lt;br /&gt;
a request any longer (because a loop would be created or all nodes have&lt;br /&gt;
already been queried), then a failure message is transmitted back to the&lt;br /&gt;
previous node. If a file is found at some point along the request path,&lt;br /&gt;
then the file is sent back through all the intermediate nodes until it reaches&lt;br /&gt;
the request originator, which allows these intermediate nodes to keep a copy of&lt;br /&gt;
the file as a cache. The next time that file key is requested, a node which is&lt;br /&gt;
closer might have it, which will increase the retrieval speed. Nodes&lt;br /&gt;
&amp;quot;forget&amp;quot; about cached copies of files in a least recently used (LRU) manner,&lt;br /&gt;
allowing the network to automatically  balance load and use available space&lt;br /&gt;
optimally. &lt;br /&gt;
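The routing-and-caching behaviour described above can be sketched as follows; node names and the cache size are illustrative, not Freenet&#039;s actual parameters:&lt;br /&gt;

```python
from collections import OrderedDict

class Node:
    # Minimal sketch of Freenet-style request routing with LRU caching.
    def __init__(self, name, cache_size=4):
        self.name = name
        self.neighbors = []
        self.store = OrderedDict()   # maps file key to data, in LRU order
        self.cache_size = cache_size

    def request(self, key, visited=None):
        if visited is None:
            visited = set()
        visited.add(self.name)
        if key in self.store:
            self.store.move_to_end(key)   # refresh LRU position
            return self.store[key]
        for n in self.neighbors:
            if n.name not in visited:
                data = n.request(key, visited)
                if data is not None:
                    # Cache a copy on the way back to the originator.
                    self.store[key] = data
                    if len(self.store) - self.cache_size == 1:
                        self.store.popitem(last=False)   # evict LRU entry
                    return data
        return None   # failure propagates back to the previous node
```

After one successful request, every intermediate node holds a cached copy, so subsequent requests for the same key terminate closer to the requester.&lt;br /&gt;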
&lt;br /&gt;
Distributing a file index was also proposed by Plaxton et al. [8].&lt;br /&gt;
Their proposal, however, has all nodes in the network maintain a&lt;br /&gt;
&#039;&#039;virtual tree&#039;&#039;. The tree information is distributed such that each node&lt;br /&gt;
knows about copies of files residing on itself and all nodes that form the&lt;br /&gt;
subtree rooted at that node. All nodes are constantly being updated with&lt;br /&gt;
neighbor information, meaning that new nodes slowly obtain tree information to&lt;br /&gt;
become the roots of their subtrees. This method has the advantage of&lt;br /&gt;
distributing load and providing a hierarchical search functionality that can&lt;br /&gt;
use well known algorithms (BFS, DFS) to find resources on a network.&lt;br /&gt;
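The subtree-indexing idea can be sketched as below; the class is a hypothetical simplification in which each node records every copy held in the subtree rooted at it:&lt;br /&gt;

```python
class TreeNode:
    # Sketch of the virtual tree described above: each node indexes
    # every copy stored in the subtree rooted at it.
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.subtree_index = {}   # maps file name to the node holding a copy

    def add_file(self, fname):
        # Record the copy here and at every ancestor.
        node = self
        while node is not None:
            node.subtree_index[fname] = self.name
            node = node.parent

    def locate(self, fname):
        # Walk upward until some ancestor's subtree contains the file
        # (a simple hierarchical search).
        node = self
        while node is not None:
            if fname in node.subtree_index:
                return node.subtree_index[fname]
            node = node.parent
        return None
```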
&lt;br /&gt;
==Pseudo-random Data Distribution==&lt;br /&gt;
Ceph [11] distributes data through a method that maximizes bandwidth and&lt;br /&gt;
efficiently uses storage resources. Ceph also avoids data imbalance (e.g.,&lt;br /&gt;
new devices are under-used) and load-asymmetries (e.g., often requested data&lt;br /&gt;
placed on only new devices) with a globally known algorithm called CRUSH&lt;br /&gt;
(Controlled Replication Under Scalable Hashing). By using a predefined number&lt;br /&gt;
of &#039;&#039;placement groups&#039;&#039; (the smallest unit of object storage), the&lt;br /&gt;
CRUSH algorithm stores and replicates data across the network in a&lt;br /&gt;
pseudo-random way. This algorithm tells the metadata servers both where the&lt;br /&gt;
data should be stored and where it can be found later, which helps clients and&lt;br /&gt;
metadata servers in locating resources. &lt;br /&gt;
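The key property, that placement is computed rather than looked up, can be sketched as below; this is a drastically simplified stand-in for CRUSH, and the function name and parameters are assumptions:&lt;br /&gt;

```python
import hashlib

def place_object(object_id, num_pgs, devices, replicas=2):
    # Hash the object id onto a placement group, then map that group
    # deterministically onto devices; a drastically simplified
    # stand-in for CRUSH.
    pg = int(hashlib.sha256(object_id.encode()).hexdigest(), 16) % num_pgs
    start = pg % len(devices)
    return [devices[(start + i) % len(devices)] for i in range(replicas)]
```

Because the function is pure, clients and metadata servers compute identical placements independently, which is how the algorithm serves both storage and lookup.&lt;br /&gt;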
&lt;br /&gt;
=Conclusions=&lt;br /&gt;
This paper has presented a brief survey of distributed file system research&lt;br /&gt;
conducted over the past 20 years. A wide range of distributed file systems have&lt;br /&gt;
been designed to have varying levels of scalability, usability and efficiency.&lt;br /&gt;
Depending on the requirements of a distributed file system, different approaches&lt;br /&gt;
may be taken to address two main concerns: file naming and file retrieval.&lt;br /&gt;
Unfortunately there is no clear winner in either of these categories, which&lt;br /&gt;
means that selecting the &amp;quot;right&amp;quot; method for a given file system will always&lt;br /&gt;
depend on the requirements and users of that system.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] D. R. Cheriton and T. P. Mann. Decentralizing a global naming service for improved performance and fault tolerance. ACM Transactions on Computer Systems, 7:147–183, 1989.&lt;br /&gt;
&lt;br /&gt;
[2] I. Clarke, O. Sandberg, B. Wiley, and T. Hong. Freenet: A distributed anonymous information storage and retrieval system. In Designing Privacy Enhancing Technologies, pages 46–66. Springer, 2001.&lt;br /&gt;
&lt;br /&gt;
[3] S. Ghemawat, H. Gobioff, and S. Leung. The Google file system. ACM SIGOPS Operating Systems Review, 37(5):29–43, 2003.&lt;br /&gt;
&lt;br /&gt;
[4] J. Howard. An overview of the Andrew file system. Carnegie Mellon University Information Technology Center, 1988.&lt;br /&gt;
&lt;br /&gt;
[5] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, C. Wells, et al. Oceanstore: An architecture for global-scale persistent storage. ACM SIGARCH Computer Architecture News, 28(5):190–201, 2000.&lt;br /&gt;
&lt;br /&gt;
[6] P. Levine. The Apollo DOMAIN distributed file system. In NATO ASI Series: Theory and Practice of Distributed Operating Systems, Y. Paker, J.-P. Banatre, M. Bozyigit (eds.), pages 241–260.&lt;br /&gt;
&lt;br /&gt;
[7] D. Mazieres, M. Kaminsky, M. Kaashoek, and E. Witchel. Separating key management from file system security. ACM SIGOPS Operating Systems Review, 33(5):124–139, 1999.&lt;br /&gt;
&lt;br /&gt;
[8] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing nearby copies of replicated objects in a distributed environment. In Proceedings of the 9th Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA), pages 311–320, 1997.&lt;br /&gt;
&lt;br /&gt;
[9] M. Satyanarayanan. A survey of distributed file systems. Annual Review of Computer Science, 4(1):73–104, 1990.&lt;br /&gt;
&lt;br /&gt;
[10] M. Satyanarayanan, J. Kistler, P. Kumar, M. Okasaki, E. Siegel, and D. Steere. Coda: a highly available file system for a distributed workstation environment. IEEE Transactions on Computers, 39(4):447–459, Apr. 1990.&lt;br /&gt;
&lt;br /&gt;
[11] S. Weil, S. Brandt, E. Miller, D. Long, and C. Maltzahn. Ceph: A scalable, high-performance distributed file system. In Proceedings of the 7th symposium on Operating systems design and implementation, pages 307–320. USENIX Association, 2006.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7465</id>
		<title>DistOS-2011W Naming and Locating Objects in Distributed Systems</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7465"/>
		<updated>2011-02-25T18:26:19Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;David Barrera&lt;br /&gt;
dbarrera@ccsl.carleton.ca&lt;br /&gt;
=Abstract=&lt;br /&gt;
This paper is a survey of existing approaches to naming and locating&lt;br /&gt;
resources in distributed file systems. We survey proposals from the past 20&lt;br /&gt;
years and find that while there have been many improvements in the hardware&lt;br /&gt;
that powers distributed file systems, there are only a few well known&lt;br /&gt;
proposals for dealing with resource location and naming.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
The ability to name resources is important in any file system. Mapping machine&lt;br /&gt;
readable names to human readable names allows users to forget about the way&lt;br /&gt;
the operating system (OS) is handling file access, and focus on completing&lt;br /&gt;
desired tasks. &lt;br /&gt;
&lt;br /&gt;
In traditional file systems, users are mostly responsible for creating&lt;br /&gt;
meaningful file hierarchies for storing and later searching for files. Users &lt;br /&gt;
must be aware of file system restrictions (e.g., file name length, file size,&lt;br /&gt;
etc.). The&lt;br /&gt;
underlying file system is only in charge of moving data to or from physical&lt;br /&gt;
storage media. Distributed file systems offer a series of advantages to users&lt;br /&gt;
(e.g., increased storage space and data reliability), but must be designed&lt;br /&gt;
such that end-users are not aware of all the logic and processing occurring in&lt;br /&gt;
the background. Indeed, a distributed file system loses its appeal if the user&lt;br /&gt;
is required to do all the heavy lifting. &lt;br /&gt;
&lt;br /&gt;
Take for example, an end-user wanting to access a PDF document. In a local&lt;br /&gt;
file system, the user must only locate the PDF file in the file hierarchy, and&lt;br /&gt;
retrieve it from disk. In a distributed file system, the PDF file might be&lt;br /&gt;
stored on a remote server, or perhaps stored multiple times on multiple&lt;br /&gt;
servers. The problem then becomes how to enable end-users to locate the&lt;br /&gt;
correct copy of a file amongst a large volume of shared data. &lt;br /&gt;
&lt;br /&gt;
This paper focuses on two important aspects of distributed file systems: (1)&lt;br /&gt;
how files are named or identified uniquely; and (2) how files are found by&lt;br /&gt;
clients or metadata servers once they are stored in the network. We survey&lt;br /&gt;
distributed file systems and file system designs from as early as 1989 and as&lt;br /&gt;
recently as 2006. We find that there are a relatively small number of ways a&lt;br /&gt;
distributed file system can approach the problem of naming and locating files,&lt;br /&gt;
and the selected approach is always dependent on the requirements of the&lt;br /&gt;
system. &lt;br /&gt;
&lt;br /&gt;
=Naming Resources=&lt;br /&gt;
On non-distributed systems (e.g., a stand-alone desktop computer), file systems&lt;br /&gt;
use&lt;br /&gt;
an object&#039;s absolute path as a unique identifier for that object in the file&lt;br /&gt;
system. This usually translates to meaning that there can&#039;t be two objects with&lt;br /&gt;
the same name in the same location (e.g., a directory&lt;br /&gt;
like =/home/dbarrera/files/=  can&#039;t contain two files called&lt;br /&gt;
=file1= ). In distributed file systems, there is an obvious need for&lt;br /&gt;
allowing multiple files with the same human-readable name, and perhaps even the&lt;br /&gt;
same absolute path (although relative to a particular client) as other clients&lt;br /&gt;
sharing storage on the system. This section reviews&lt;br /&gt;
methods used by existing distributed file systems to handle object naming at a&lt;br /&gt;
massive (sometimes global) scale. &lt;br /&gt;
&lt;br /&gt;
Depending on the requirements of the file system (maximum number of clients,&lt;br /&gt;
concurrent read/writes, etc.), different approaches to naming might be taken.&lt;br /&gt;
Some file systems, such as Coda [10], aim to mimic UNIX-like file&lt;br /&gt;
naming. Other systems have relaxed POSIX-like behaviour to allow for&lt;br /&gt;
better scalability and speed. &lt;br /&gt;
&lt;br /&gt;
==Local Naming==&lt;br /&gt;
The Sun Network File System (NFS) specifies that each client sees a UNIX file&lt;br /&gt;
namespace with a private root. Due to each client being free to manage&lt;br /&gt;
its own namespace, several workstations mounting the same remote directory&lt;br /&gt;
might not have the same view of the files contained in that directory. However,&lt;br /&gt;
if file-sharing or location transparency is required, it can be achieved by&lt;br /&gt;
convention (e.g., users agreeing on calling a file a specific name) rather than&lt;br /&gt;
by design. &lt;br /&gt;
&lt;br /&gt;
One of the first distributed file systems, the Apollo DOMAIN File System&lt;br /&gt;
[6] uses 64-bit unique identifiers (UIDs) for every object in the&lt;br /&gt;
system. Each Apollo client also has a UID created at the time of its manufacture.&lt;br /&gt;
When a new file is created, the UID for that file is derived from the time and&lt;br /&gt;
UID of the file&#039;s workstation (this guarantees uniqueness of UIDs per file&lt;br /&gt;
without a central server assigning them). &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses an internal 96-bit identifier for&lt;br /&gt;
uniquely identifying files. These identifiers are used in the background to&lt;br /&gt;
refer to files, but are never shown to users. Andrew clients see a partitioned&lt;br /&gt;
namespace comprised of a local and shared namespace. The shared namespace is&lt;br /&gt;
identical on all workstations, managed by a central server which can be&lt;br /&gt;
replicated. The local namespace is typically only used for files required to&lt;br /&gt;
boot an Andrew client, and to initialize the distributed client operation. &lt;br /&gt;
&lt;br /&gt;
==Cryptographic Naming==&lt;br /&gt;
OceanStore [5] stores objects at the lowest level by identifying&lt;br /&gt;
them with a&lt;br /&gt;
globally unique identifier (GUID). GUIDs are convenient in distributed&lt;br /&gt;
systems because they do not require a central authority to give them out. This&lt;br /&gt;
allows any client on the system to autonomously generate a valid GUID&lt;br /&gt;
with low probability of collisions (GUIDs are typically long bit strings e.g.,&lt;br /&gt;
more than 128 bits). At the same time, the benefit of an autonomous,&lt;br /&gt;
de-centralized namespace management allows for malicious clients to hijack&lt;br /&gt;
someone else&#039;s namespace and intentionally create collisions. To address this&lt;br /&gt;
issue, OceanStore uses a technique proposed by Mazieres et al. [7] called&lt;br /&gt;
&#039;&#039;self-certifying path names&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Self-certifying pathnames have all the benefits of public key cryptography&lt;br /&gt;
without the burden of key management, which is known to be difficult,&lt;br /&gt;
especially at a very large scale. One of the design goals of self-certifying&lt;br /&gt;
pathnames is for clients to cryptographically verify the contents of any file&lt;br /&gt;
on the network, without requiring external information. The novelty of this&lt;br /&gt;
approach is that file names inherently contain all information necessary to&lt;br /&gt;
communicate with remote servers. Essentially, an object&#039;s GUID is the secure&lt;br /&gt;
hash (SHA-1 or similar) of the object&#039;s owner&#039;s key and some human readable&lt;br /&gt;
name. By embedding a client key into the GUID, servers and other clients can&lt;br /&gt;
verify the identity and ownership of an object without querying a&lt;br /&gt;
third-party server.&lt;br /&gt;
&lt;br /&gt;
Freenet [2] also uses keypair-based naming but in a slightly&lt;br /&gt;
different way than OceanStore. Freenet identifies all files by a binary key&lt;br /&gt;
which is obtained by applying a hash function. There are three types of keys in&lt;br /&gt;
this distributed file system:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Keyword-signed key (KSK)&#039;&#039;&#039; This is the simplest identifier because it&lt;br /&gt;
is derived from an arbitrary text string chosen by the user who is storing the&lt;br /&gt;
file on the network. A user storing a PDF document might use the text string&lt;br /&gt;
&amp;quot;freenet/distributed/file/system&amp;quot; to describe the file. The string is used to&lt;br /&gt;
deterministically generate a private/public keypair. The public part of the key&lt;br /&gt;
is hashed and becomes the file identifier. &lt;br /&gt;
&lt;br /&gt;
We note that files can be recovered by guessing or brute-forcing the text&lt;br /&gt;
string. Also, nothing stops two different users from coming up with the same&lt;br /&gt;
descriptive string, and the second user&#039;s file would be rejected by the system,&lt;br /&gt;
as there would be a collision in the namespace.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Signed-subspace key (SSK)&#039;&#039;&#039; This method enables personal namespaces&lt;br /&gt;
for users. For this to work, users generate a public/private keypair using a&lt;br /&gt;
good random number generator. The user also creates a descriptive text string,&lt;br /&gt;
but in this case, it is XORed with the public key to generate the file key.&lt;br /&gt;
This method allows users to manage their own namespace (i.e., collisions can&lt;br /&gt;
still occur locally if the user picks the same string for two files). Users can&lt;br /&gt;
also&lt;br /&gt;
publish a list of keywords and a public key if they want to make those files&lt;br /&gt;
publicly available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Content-hash key (CHK)&#039;&#039;&#039; In this method, the file key is derived by&lt;br /&gt;
hashing the contents of the file. Files are also encrypted with a random encryption&lt;br /&gt;
key specific to that file. For others to retrieve the file, the owner makes&lt;br /&gt;
available the file hash along with the decryption key.&lt;br /&gt;
&lt;br /&gt;
==Hierarchical naming==&lt;br /&gt;
Cheriton et al. [1] suggest naming objects using a long&lt;br /&gt;
name which includes multiple pieces of information: (1) the resource&#039;s name&lt;br /&gt;
and location on the file server where it resides; (2) the organization where&lt;br /&gt;
that file server is located; and (3) a global administrative domain&lt;br /&gt;
representing all the organizations participating in the distributed file system.&lt;br /&gt;
For example, a file name of &amp;quot;edu/stanford/server4/bin/listdir&amp;quot; is split&lt;br /&gt;
into: edu (global domain), /stanford/server4 (organization domain), and /bin/listdir (directory and file).&lt;br /&gt;
&lt;br /&gt;
This naming scheme gives clients all the necessary information (using only the&lt;br /&gt;
file name) to locate a file in a globally distributed file system. While this&lt;br /&gt;
may seem like a good solution, there are a few inherent limitations to the&lt;br /&gt;
proposal.&lt;br /&gt;
&lt;br /&gt;
First, file replication and load balancing can only be done at the lowest level&lt;br /&gt;
(i.e., in the file server selected by the organization hosting the file). This&lt;br /&gt;
can lead to a bottleneck when multiple files in the same organization become&lt;br /&gt;
&amp;quot;hot&amp;quot;. The authors suggest using caching and multicast to improve performance&lt;br /&gt;
and avoid congestion on inter-organization links. Second, it requires all&lt;br /&gt;
organizations participating in the system to agree or regulate the common&lt;br /&gt;
namespace, much like the current Domain Name System (DNS). For this to work&lt;br /&gt;
there must be an organization in which each stakeholder in the system is&lt;br /&gt;
equally represented. While such organizations do exist currently (e.g., the&lt;br /&gt;
Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit&lt;br /&gt;
organization that represents regional registrars, the Internet Engineering&lt;br /&gt;
Task Force (IETF), and Internet users and providers in keeping the Internet&lt;br /&gt;
secure, stable and inter-operable), they have large amounts of&lt;br /&gt;
administrative overhead and therefore limit the speed at which changes to&lt;br /&gt;
deployed implementations can take place. &lt;br /&gt;
&lt;br /&gt;
One advantage of the approach of Cheriton et al. is that names and directory&lt;br /&gt;
structures must only be unique within an organization/server. The system as a&lt;br /&gt;
whole does not have to keep track of every organization-level implementation,&lt;br /&gt;
yet different organizations should still be able to exchange data.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
The Google File System (GFS) [3] takes a different approach to&lt;br /&gt;
naming files. GFS assumes that all the clients communicate with a single master&lt;br /&gt;
server, which keeps a table mapping full pathnames to metadata (file locks and&lt;br /&gt;
location). The namespace is therefore centrally managed, and all clients must&lt;br /&gt;
register file operations with the master before they can be performed. While&lt;br /&gt;
this architecture has an obvious central point of failure (which can be&lt;br /&gt;
addressed by replication), it has the advantage of not having to deal with a&lt;br /&gt;
distributed namespace. This central design also has the advantage of improving&lt;br /&gt;
data consistency across multi-level distribution nodes. It also allows data&lt;br /&gt;
to be moved to optimal nodes to increase performance or distribute load. It&#039;s&lt;br /&gt;
worth noting that lookup tables are a fundamentally different way to find&lt;br /&gt;
contents in a directory as compared to UNIX &#039;&#039;inodes&#039;&#039;  and related data&lt;br /&gt;
structures. This approach has inherent limitations such as not being able to&lt;br /&gt;
support symlinks.&lt;br /&gt;
&lt;br /&gt;
Ceph [11] client nodes use near-POSIX file system interfaces which are&lt;br /&gt;
relayed back to a central metadata cluster. The metadata cluster is responsible&lt;br /&gt;
for managing the system-wide namespace, coordinating security and verifying&lt;br /&gt;
consistency. Ceph decouples data from metadata which enables the system to also&lt;br /&gt;
distribute metadata servers themselves. The metadata servers store pointers to&lt;br /&gt;
&amp;quot;object-storage clusters&amp;quot; which hold the actual data portion of the file. The&lt;br /&gt;
metadata servers also handle file read and write operations, which then&lt;br /&gt;
redirect clients to the appropriate object storage cluster or device. &lt;br /&gt;
&lt;br /&gt;
=Locating Resources=&lt;br /&gt;
&lt;br /&gt;
==Local File Systems==&lt;br /&gt;
In some distributed systems, files are copied locally and replicated to remote&lt;br /&gt;
servers in the background. NFS [9] is one example where clients&lt;br /&gt;
mount the remote file system locally. The remote directory structure is mapped&lt;br /&gt;
on to a local namespace which makes files transparently accessible to&lt;br /&gt;
clients. In this scheme, there is no need for distributing indexes or metadata,&lt;br /&gt;
since all files appear to be local. A client can find files on the&lt;br /&gt;
&amp;quot;distributed&amp;quot; file system in the same way local files are found.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
File systems that use lookup tables for storing the location and&lt;br /&gt;
metadata of files (e.g., [3,11]) can locate resources trivially by&lt;br /&gt;
querying the lookup table. The table usually contains a pointer to either the&lt;br /&gt;
file itself or a server hosting that file, which can in turn handle the file&lt;br /&gt;
operation request. &lt;br /&gt;
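The lookup-table approach above can be sketched as a small in-memory map (class and server names here are hypothetical, for illustration only):&lt;br /&gt;

```python
# Hypothetical sketch of a metadata server's lookup table: full pathnames
# map to the location information needed to reach the file.
class MetadataServer:
    def __init__(self):
        self.table = {}  # pathname -> hosting server

    def register(self, path, host):
        self.table[path] = host

    def locate(self, path):
        # Return the server hosting the file, or None if unknown.
        return self.table.get(path)

md = MetadataServer()
md.register("/home/dbarrera/file1", "server-a")
print(md.locate("/home/dbarrera/file1"))  # server-a
```

A real system would also track locks, replica lists and consistency metadata; the point is only that a single query on the table resolves a name to a location.&lt;br /&gt;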
&lt;br /&gt;
A very basic implementation of a metadata lookup is used in the Apollo Domain&lt;br /&gt;
File System [6]. A central name server maps client-readable strings&lt;br /&gt;
(e.g., &amp;quot;/home/dbarrera/file1&amp;quot; ) to UIDs. The name server can be&lt;br /&gt;
distributed by replicating it at multiple locations, allowing clients to query&lt;br /&gt;
the nearest server instead of a central one. &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses unique file identifiers to &lt;br /&gt;
populate a &#039;&#039;location database&#039;&#039;  on the central server which maps file&lt;br /&gt;
identifiers to locations. The server is therefore responsible for forwarding&lt;br /&gt;
file access requests to the correct client hosting that file.&lt;br /&gt;
&lt;br /&gt;
==Distributed Index Search==&lt;br /&gt;
Systems like Freenet [2] are designed to make it difficult for&lt;br /&gt;
unauthorized users to access restricted files. This is a difficult problem,&lt;br /&gt;
since the system aims to be highly distributed while at the same time providing&lt;br /&gt;
guarantees that files won&#039;t be read or modified by unauthorized third parties.&lt;br /&gt;
However, Freenet has developed an interesting approach to locating files: when&lt;br /&gt;
a file is requested from the network, a user must first obtain or calculate the&lt;br /&gt;
file key. The user&#039;s node requests that file&lt;br /&gt;
from neighboring nodes, which in turn check if the file is stored locally and,&lt;br /&gt;
if not, forward the request to the next nearest neighbor. If a node cannot forward&lt;br /&gt;
a request any longer (because a loop would be created or all nodes have&lt;br /&gt;
already been queried), then a failure message is transmitted back to the&lt;br /&gt;
previous node. If a file is found at some point along the request path,&lt;br /&gt;
then the file is sent back through all the intermediate nodes until it reaches&lt;br /&gt;
the request originator, which allows these intermediate nodes to keep a copy of&lt;br /&gt;
the file as a cache. The next time that file key is requested, a node which is&lt;br /&gt;
closer might have it, which will increase the retrieval speed. Nodes&lt;br /&gt;
&amp;quot;forget&amp;quot; about cached copies of files in a least recently used (LRU) manner,&lt;br /&gt;
allowing the network to automatically  balance load and use available space&lt;br /&gt;
optimally. &lt;br /&gt;
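The request-forwarding and caching behaviour described above can be sketched as follows (a toy model with explicit neighbor lists; the node names and flat topology are assumptions for illustration, not Freenet&#039;s actual key-closeness routing):&lt;br /&gt;

```python
from collections import OrderedDict

class Node:
    """Toy Freenet-style node: checks its own store, else asks neighbors."""
    def __init__(self, name, cache_size=4):
        self.name = name
        self.neighbors = []
        self.store = OrderedDict()  # key -> data, kept in LRU order
        self.cache_size = cache_size

    def cache(self, key, data):
        self.store[key] = data
        self.store.move_to_end(key)
        while len(self.store) > self.cache_size:
            self.store.popitem(last=False)  # "forget" least recently used

    def request(self, key, visited=None):
        visited = set() if visited is None else visited
        if self.name in visited:
            return None  # would create a loop: report failure back
        visited.add(self.name)
        if key in self.store:
            self.store.move_to_end(key)
            return self.store[key]
        for n in self.neighbors:
            data = n.request(key, visited)
            if data is not None:
                self.cache(key, data)  # intermediate nodes keep a copy
                return data
        return None  # all reachable nodes queried: failure

# A -- B -- C, where only C initially holds the file.
a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors, b.neighbors = [b], [a, c]
c.cache("file-key", "file-bytes")
print(a.request("file-key"))  # found via C; B caches a copy on the way back
```

After the first request, B holds a cached copy, so a second request from A is satisfied one hop sooner, which is the load-balancing effect described above.&lt;br /&gt;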
&lt;br /&gt;
Distributing a file index was also proposed by Plaxton et al. [8].&lt;br /&gt;
Their proposal, however, attempts to have all nodes in the network maintain a&lt;br /&gt;
&#039;&#039;virtual tree&#039;&#039;. The tree information is distributed such that each node&lt;br /&gt;
knows about copies of files residing on itself and all nodes that form the&lt;br /&gt;
subtree rooted at that node. All nodes are constantly being updated with&lt;br /&gt;
neighbor information, meaning that new nodes slowly obtain tree information to&lt;br /&gt;
become the roots of their subtrees. This method has the advantage of&lt;br /&gt;
distributing load and providing a hierarchical search functionality that can&lt;br /&gt;
use well known algorithms (BFS, DFS) to find resources on a network.&lt;br /&gt;
&lt;br /&gt;
==Pseudo-random Data Distribution==&lt;br /&gt;
Ceph [11] distributes data through a method that maximizes bandwidth and&lt;br /&gt;
efficiently uses storage resources. Ceph also avoids data imbalance (e.g.,&lt;br /&gt;
new devices are under-used) and load-asymmetries (e.g., often requested data&lt;br /&gt;
placed on only new devices) with a globally known algorithm called CRUSH&lt;br /&gt;
(Controlled Replication Under Scalable Hashing). By using a predefined number&lt;br /&gt;
of &#039;&#039;placement groups&#039;&#039; (the smallest unit of object storage), the&lt;br /&gt;
CRUSH algorithm stores and replicates data across the network in a&lt;br /&gt;
pseudo-random way. This algorithm tells the metadata servers both where the&lt;br /&gt;
data should be stored and where it can be found later, which helps clients and&lt;br /&gt;
metadata servers in locating resources. &lt;br /&gt;
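The idea of a globally known placement function can be illustrated with a small sketch (this is not the actual CRUSH algorithm, only the general principle that any party can deterministically compute where data lives):&lt;br /&gt;

```python
import hashlib

def placement_group(object_id: str, num_pgs: int) -> int:
    # Hash the object name into one of a fixed number of placement groups.
    h = hashlib.sha1(object_id.encode()).digest()
    return int.from_bytes(h[:4], "big") % num_pgs

def devices_for_pg(pg: int, devices: list, replicas: int = 2) -> list:
    # Deterministically rank devices for this PG and take the top `replicas`.
    ranked = sorted(devices,
                    key=lambda d: hashlib.sha1(f"{pg}:{d}".encode()).digest())
    return ranked[:replicas]

pg = placement_group("myfile.pdf", num_pgs=128)
replica_set = devices_for_pg(pg, ["osd0", "osd1", "osd2", "osd3"])
print(pg, replica_set)
```

Because both functions are pure, metadata servers and clients compute the same answer, which is why no per-object location table is needed.&lt;br /&gt;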
&lt;br /&gt;
=Conclusions=&lt;br /&gt;
This paper has presented a brief survey of distributed file system research&lt;br /&gt;
conducted over the past 20 years. A wide range of distributed file systems have&lt;br /&gt;
been designed to have varying levels of scalability, usability and efficiency.&lt;br /&gt;
Depending on the requirements of a distributed file system, different approaches&lt;br /&gt;
may be taken to address two main concerns: file naming and file retrieval.&lt;br /&gt;
Unfortunately there is no clear winner in either of these categories, which&lt;br /&gt;
means that selecting the &amp;quot;right&amp;quot; method for a given file system will always&lt;br /&gt;
depend on the requirements and users of that system.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] D. R. Cheriton and T. P. Mann. Decentralizing a global naming service for improved performance and fault tolerance. ACM Transactions on Computer Systems, 7:147–183, 1989.&lt;br /&gt;
&lt;br /&gt;
[2] I. Clarke, O. Sandberg, B. Wiley, and T. Hong. Freenet: A distributed anonymous information storage and retrieval system. In Designing Privacy Enhancing Technologies, pages 46–66. Springer, 2001.&lt;br /&gt;
&lt;br /&gt;
[3] S. Ghemawat, H. Gobioff, and S. Leung. The Google file system. ACM SIGOPS Operating Systems Review, 37(5):29–43, 2003.&lt;br /&gt;
&lt;br /&gt;
[4] J. Howard. An overview of the Andrew file system. Carnegie Mellon University Information Technology Center, 1988.&lt;br /&gt;
&lt;br /&gt;
[5] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, C. Wells, et al. Oceanstore: An architecture for global-scale persistent storage. ACM SIGARCH Computer Architecture News, 28(5):190–201, 2000.&lt;br /&gt;
&lt;br /&gt;
[6] P. Levine. The Apollo DOMAIN Distributed File System. NATO ASI Series: Theory and Practice of Distributed Operating Systems, Y. Paker, JP. Banatre, M. Bozyi git, pages 241–260.&lt;br /&gt;
&lt;br /&gt;
[7] D. Mazieres, M. Kaminsky, M. Kaashoek, and E. Witchel. Separating key management from file system security. ACM SIGOPS Operating Systems Review, 33(5):124–139, 1999.&lt;br /&gt;
&lt;br /&gt;
[8] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing nearby copies of replicated objects in a distributed environment. pages 311–320, 1997.&lt;br /&gt;
&lt;br /&gt;
[9] M. Satyanarayanan. A survey of distributed file systems. Annual Review of Computer Science, 4(1):73–104, 1990.&lt;br /&gt;
&lt;br /&gt;
[10] M. Satyanarayanan, J. Kistler, P. Kumar, M. Okasaki, E. Siegel, and D. Steere. Coda: a highly available file system for a distributed workstation environment. Computers, IEEE Transactions on, 39(4):447–459, Apr. 1990.&lt;br /&gt;
&lt;br /&gt;
[11] S. Weil, S. Brandt, E. Miller, D. Long, and C. Maltzahn. Ceph: A scalable, high-performance distributed file system. In Proceedings of the 7th symposium on Operating systems design and implementation, pages 307–320. USENIX Association, 2006.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7464</id>
		<title>DistOS-2011W Naming and Locating Objects in Distributed Systems</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7464"/>
		<updated>2011-02-25T18:22:29Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
This paper is a survey of existing approaches to naming and locating&lt;br /&gt;
resources in distributed file systems. We survey proposals from the past 20&lt;br /&gt;
years and find that while there have been many improvements in the hardware&lt;br /&gt;
that powers distributed file systems, there are only a few well known&lt;br /&gt;
proposals for dealing with resource location and naming.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
The ability to name resources is important in any file system. Mapping&lt;br /&gt;
machine-readable names to human-readable names allows users to forget about how&lt;br /&gt;
the operating system (OS) handles file access, and focus on completing&lt;br /&gt;
desired tasks. &lt;br /&gt;
&lt;br /&gt;
In traditional file systems, users are mostly responsible for creating&lt;br /&gt;
meaningful file hierarchies for storing and later searching for files. Users &lt;br /&gt;
must be aware of file system restrictions (e.g., file name length, file size,&lt;br /&gt;
etc.). The&lt;br /&gt;
underlying file system is only in charge of moving data to or from physical&lt;br /&gt;
storage media. Distributed file systems offer a series of advantages to users&lt;br /&gt;
(e.g., increased storage space and data reliability), but must be designed&lt;br /&gt;
such that end-users are not aware of all the logic and processing occurring in&lt;br /&gt;
the background. Indeed, a distributed file system loses its appeal if the user&lt;br /&gt;
is required to do all the heavy lifting. &lt;br /&gt;
&lt;br /&gt;
Take, for example, an end-user wanting to access a PDF document. In a local&lt;br /&gt;
file system, the user must only locate the PDF file in the file hierarchy and&lt;br /&gt;
retrieve it from disk. In a distributed file system, the PDF file might be&lt;br /&gt;
stored on a remote server, or perhaps stored multiple times on multiple&lt;br /&gt;
servers. The problem then becomes how to enable end-users to locate the&lt;br /&gt;
correct copy of a file amongst a large volume of shared data. &lt;br /&gt;
&lt;br /&gt;
This paper focuses on two important aspects of distributed file systems: (1)&lt;br /&gt;
how files are named or identified uniquely; and (2) how files are found by&lt;br /&gt;
clients or metadata servers once they are stored in the network. We survey&lt;br /&gt;
distributed file systems and file system designs from as early as 1989 and as&lt;br /&gt;
recently as 2006. We find that there are a relatively small number of ways a&lt;br /&gt;
distributed file system can approach the problem of naming and locating files,&lt;br /&gt;
and the selected approach is always dependent on the requirements of the&lt;br /&gt;
system. &lt;br /&gt;
&lt;br /&gt;
=Naming Resources=&lt;br /&gt;
On non-distributed systems (e.g., a stand-alone desktop computer), file systems&lt;br /&gt;
use an object&#039;s absolute path as a unique identifier for that object in the file&lt;br /&gt;
system. This usually means that there can&#039;t be two objects with&lt;br /&gt;
the same name in the same location (e.g., a directory&lt;br /&gt;
like =/home/dbarrera/files/= can&#039;t contain two files called&lt;br /&gt;
=file1=). In distributed file systems, there is an obvious need for&lt;br /&gt;
allowing multiple files with the same human-readable name, and perhaps even the&lt;br /&gt;
same absolute path (although relative to a particular client) as other clients&lt;br /&gt;
sharing storage on the system. This section reviews&lt;br /&gt;
methods used by existing distributed file systems to handle object naming at a&lt;br /&gt;
massive (sometimes global) scale. &lt;br /&gt;
&lt;br /&gt;
Depending on the requirements of the file system (maximum number of clients,&lt;br /&gt;
concurrent read/writes, etc.), different approaches to naming might be taken.&lt;br /&gt;
Some file systems, such as Coda [10], aim to mimic UNIX-like file&lt;br /&gt;
naming. Other systems relax POSIX-like behaviour to allow for&lt;br /&gt;
better scalability and speed. &lt;br /&gt;
&lt;br /&gt;
==Local Naming==&lt;br /&gt;
The Sun Network File System (NFS) specifies that each client sees a UNIX file&lt;br /&gt;
namespace with a private root. Since each client is free to manage&lt;br /&gt;
its own namespace, several workstations mounting the same remote directory&lt;br /&gt;
might not have the same view of the files contained in that directory. However,&lt;br /&gt;
if file-sharing or location transparency is required, it can be achieved by&lt;br /&gt;
convention (e.g., users agreeing on calling a file a specific name) rather than&lt;br /&gt;
by design. &lt;br /&gt;
&lt;br /&gt;
One of the first distributed file systems, the Apollo DOMAIN File System&lt;br /&gt;
[6] uses 64-bit unique identifiers (UIDs) for every object in the&lt;br /&gt;
system. Each Apollo client also has a UID created at the time of its manufacture.&lt;br /&gt;
When a new file is created, the UID for that file is derived from the time and&lt;br /&gt;
the UID of the file&#039;s workstation (this guarantees uniqueness of UIDs per file&lt;br /&gt;
without a central server assigning them). &lt;br /&gt;
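The UID construction can be sketched as follows (the exact 64-bit field layout below is an assumption for illustration; the paper&#039;s actual bit split is not reproduced here):&lt;br /&gt;

```python
import time

def make_uid(node_uid: int, counter: int) -> int:
    # Assumed layout: 32 bits of creation time, 16 bits of workstation UID,
    # 16 bits of a per-node counter. Because the workstation UID is unique
    # and the counter distinguishes same-second creations, no central
    # server is needed for uniqueness.
    t = int(time.time()) & 0xFFFFFFFF
    return (t << 32) | ((node_uid & 0xFFFF) << 16) | (counter & 0xFFFF)

uid1 = make_uid(node_uid=0x0A01, counter=0)
uid2 = make_uid(node_uid=0x0A01, counter=1)
```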
&lt;br /&gt;
The Andrew file system [4] uses an internal 96-bit identifier for&lt;br /&gt;
uniquely identifying files. These identifiers are used in the background to&lt;br /&gt;
refer to files, but are never shown to users. Andrew clients see a partitioned&lt;br /&gt;
namespace comprised of a local and shared namespace. The shared namespace is&lt;br /&gt;
identical on all workstations, managed by a central server which can be&lt;br /&gt;
replicated. The local namespace is typically only used for files required to&lt;br /&gt;
boot an Andrew client, and to initialize the distributed client operation. &lt;br /&gt;
&lt;br /&gt;
==Cryptographic Naming==&lt;br /&gt;
OceanStore [5] stores objects at the lowest level by identifying them with a&lt;br /&gt;
globally unique identifier (GUID). GUIDs are convenient in distributed&lt;br /&gt;
systems because they do not require a central authority to hand them out. This&lt;br /&gt;
allows any client on the system to autonomously generate a valid GUID&lt;br /&gt;
with low probability of collisions (GUIDs are typically long bit strings, e.g.,&lt;br /&gt;
more than 128 bits). At the same time, autonomous, de-centralized namespace&lt;br /&gt;
management allows malicious clients to hijack&lt;br /&gt;
someone else&#039;s namespace and intentionally create collisions. To address this&lt;br /&gt;
issue, OceanStore uses a technique proposed by Mazieres et al. [7] called&lt;br /&gt;
&#039;&#039;self-certifying path names&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Self-certifying pathnames have all the benefits of public key cryptography&lt;br /&gt;
without the burden of key management, which is known to be difficult,&lt;br /&gt;
especially at a very large scale. One of the design goals of self-certifying&lt;br /&gt;
pathnames is for clients to cryptographically verify the contents of any file&lt;br /&gt;
on the network, without requiring external information. The novelty of this&lt;br /&gt;
approach is that file names inherently contain all information necessary to&lt;br /&gt;
communicate with remote servers. Essentially, an object&#039;s GUID is the secure&lt;br /&gt;
hash (SHA-1 or similar) of the object&#039;s owner&#039;s key and some human readable&lt;br /&gt;
name. By embedding a client key into the GUID, servers and other clients can&lt;br /&gt;
verify the identity and ownership of an object without querying a&lt;br /&gt;
third-party server.&lt;br /&gt;
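A minimal sketch of that construction, assuming the owner&#039;s public key is available as raw bytes (the key string below is a placeholder, not real key material):&lt;br /&gt;

```python
import hashlib

def object_guid(owner_pubkey: bytes, name: str) -> str:
    # GUID = secure hash of (owner's public key || human-readable name);
    # anyone holding the key and name can recompute and verify it.
    return hashlib.sha1(owner_pubkey + name.encode()).hexdigest()

pubkey = b"-----placeholder public key bytes-----"
guid = object_guid(pubkey, "reports/2011.pdf")
```

A server claiming to host the object must present a key and name that hash to the same GUID, so ownership can be checked without querying a third party.&lt;br /&gt;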
&lt;br /&gt;
Freenet [2] also uses keypair-based naming but in a slightly&lt;br /&gt;
different way than OceanStore. Freenet identifies all files by a binary key&lt;br /&gt;
which is obtained by applying a hash function. There are three types of keys in&lt;br /&gt;
this distributed file system:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Keyword-signed key (KSK)&#039;&#039;&#039; This is the simplest identifier because it&lt;br /&gt;
is derived from an arbitrary text string chosen by the user who is storing the&lt;br /&gt;
file on the network. A user storing a PDF document might use the text string&lt;br /&gt;
&amp;quot;freenet/distributed/file/system&amp;quot; to describe the file. The string is used to&lt;br /&gt;
deterministically generate a private/public keypair. The public part of the key&lt;br /&gt;
is hashed and becomes the file identifier. &lt;br /&gt;
&lt;br /&gt;
We note that files can be recovered by guessing or brute-forcing the text&lt;br /&gt;
string. Also, nothing stops two different users from coming up with the same&lt;br /&gt;
descriptive string, and the second user&#039;s file would be rejected by the system,&lt;br /&gt;
as there would be a collision in the namespace.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Signed-subspace key (SSK)&#039;&#039;&#039; This method enables personal namespaces&lt;br /&gt;
for users. For this to work, users generate a public/private keypair using a&lt;br /&gt;
good random number generator. The user also creates a descriptive text string,&lt;br /&gt;
but in this case, it is XORed with the public key to generate the file key.&lt;br /&gt;
This method allows users to manage their own namespace (i.e., collisions can&lt;br /&gt;
still occur locally if the user picks the same string for two files). Users can&lt;br /&gt;
also&lt;br /&gt;
publish a list of keywords and a public key if they want to make those files&lt;br /&gt;
publicly available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Content-hash key (CHK)&#039;&#039;&#039; In this method, the file key is derived by&lt;br /&gt;
hashing the contents of the file. Files are also encrypted with a random encryption&lt;br /&gt;
key specific to that file. For others to retrieve the file, the owner makes&lt;br /&gt;
available the file hash along with the decryption key.&lt;br /&gt;
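A content-hash key can be sketched like this (XOR with a random key stands in for a real cipher here; it is not secure, just illustrative of the structure):&lt;br /&gt;

```python
import hashlib, os

def make_chk(content: bytes):
    key = os.urandom(16)                               # per-file random key
    enc = bytes(b ^ key[i % 16] for i, b in enumerate(content))
    chk = hashlib.sha256(enc).hexdigest()              # file is located by this
    return chk, key, enc

def decrypt(enc: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so applying the key again recovers plaintext.
    return bytes(b ^ key[i % 16] for i, b in enumerate(enc))

chk, key, enc = make_chk(b"report contents")
```

The owner publishes the pair (file hash, decryption key): the hash locates the encrypted file on the network, and the key decrypts it.&lt;br /&gt;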
&lt;br /&gt;
==Hierarchical naming==&lt;br /&gt;
Cheriton et al. [1] suggest naming objects using a long&lt;br /&gt;
name which includes multiple pieces of information: (1) the resource&#039;s name&lt;br /&gt;
and location on the file server where it resides; (2) the organization where&lt;br /&gt;
that file server is located; and (3) a global administrative domain&lt;br /&gt;
representing all the organizations participating the distributed file system.&lt;br /&gt;
For example, a file name of &amp;quot;/edu/stanford/server4/bin/listdir&amp;quot; is split&lt;br /&gt;
into: /edu (global domain), /stanford/server4 (organization domain), and /bin/listdir (directory and file).&lt;br /&gt;
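Splitting such a name into its three components is straightforward; the two-segment organization domain below is an assumption for this example, not a rule from the paper:&lt;br /&gt;

```python
def split_name(name: str) -> dict:
    # "/edu/stanford/server4/bin/listdir" ->
    #   global domain, organization domain, and server-local path.
    parts = name.strip("/").split("/")
    return {"global": "/" + parts[0],
            "organization": "/" + "/".join(parts[1:3]),
            "path": "/" + "/".join(parts[3:])}

print(split_name("/edu/stanford/server4/bin/listdir"))
```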
&lt;br /&gt;
This naming scheme gives clients all the necessary information (using only the&lt;br /&gt;
file name) to locate a file in a globally distributed file system. While this&lt;br /&gt;
may seem like a good solution, there are a few inherent limitations to the&lt;br /&gt;
proposal.&lt;br /&gt;
&lt;br /&gt;
First, file replication and load balancing can only be done at the lowest level&lt;br /&gt;
(i.e., in the file server selected by the organization hosting the file). This&lt;br /&gt;
can lead to a bottleneck when multiple files in the same organization become&lt;br /&gt;
&amp;quot;hot&amp;quot;. The authors suggest using caching and multicast to improve performance&lt;br /&gt;
and avoid congestion on inter-organization links. Second, it requires all&lt;br /&gt;
organizations participating in the system to agree or regulate the common&lt;br /&gt;
namespace, much like the current Domain Name System (DNS). For this to work&lt;br /&gt;
there must be an organization in which each stakeholder in the system is&lt;br /&gt;
equally represented. While systems like this do exist currently (e.g., the&lt;br /&gt;
Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit&lt;br /&gt;
organization that represents regional registrars, the Internet&lt;br /&gt;
Engineering Task Force (IETF), and Internet users and providers to help keep the&lt;br /&gt;
Internet secure, stable and inter-operable), they have large amounts of&lt;br /&gt;
administrative overhead and therefore limit the speed at which changes to&lt;br /&gt;
deployed implementations can take place. &lt;br /&gt;
&lt;br /&gt;
One advantage of the approach of Cheriton et al. is that names and directory&lt;br /&gt;
structures must only be unique within an organization/server. The system as a&lt;br /&gt;
whole does not have to keep track of every organization-level implementation,&lt;br /&gt;
yet different organizations should still be able to exchange data.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
The Google File System (GFS) [3] takes a different approach to&lt;br /&gt;
naming files. GFS assumes that all the clients communicate with a single master&lt;br /&gt;
server, who keeps a table mapping full pathnames to metadata (file locks and&lt;br /&gt;
location). The namespace is therefore centrally managed, and all clients must&lt;br /&gt;
register file operations with the master before they can be performed. While&lt;br /&gt;
this architecture has an obvious central point of failure (which can be&lt;br /&gt;
addressed by replication), it has the advantage of not having to deal with a&lt;br /&gt;
distributed namespace. This central design also has the advantage of improving&lt;br /&gt;
data consistency across multi-level distribution nodes. It also allows data&lt;br /&gt;
to be moved to optimal nodes to increase performance or distribute load. It&#039;s&lt;br /&gt;
worth noting that lookup tables are a fundamentally different way to find&lt;br /&gt;
contents in a directory as compared to UNIX &#039;&#039;inodes&#039;&#039; and related data&lt;br /&gt;
structures. This approach has inherent limitations, such as not being able to&lt;br /&gt;
support symlinks.&lt;br /&gt;
&lt;br /&gt;
Ceph [11] client nodes use near-POSIX file system interfaces which are&lt;br /&gt;
relayed back to a central metadata cluster. The metadata cluster is responsible&lt;br /&gt;
for managing the system-wide namespace, coordinating security and verifying&lt;br /&gt;
consistency. Ceph decouples data from metadata which enables the system to also&lt;br /&gt;
distribute metadata servers themselves. The metadata servers store pointers to&lt;br /&gt;
&amp;quot;object-storage clusters&amp;quot; which hold the actual data portion of the file. The&lt;br /&gt;
metadata servers also handle file read and write operations, which then&lt;br /&gt;
redirect clients to the appropriate object storage cluster or device. &lt;br /&gt;
&lt;br /&gt;
=Locating Resources=&lt;br /&gt;
&lt;br /&gt;
==Local File Systems==&lt;br /&gt;
In some distributed systems, files are copied locally and replicated to remote&lt;br /&gt;
servers in the background. NFS [9] is one example where clients&lt;br /&gt;
mount the remote file system locally. The remote directory structure is mapped&lt;br /&gt;
on to a local namespace which makes files transparently accessible to&lt;br /&gt;
clients. In this scheme, there is no need for distributing indexes or metadata,&lt;br /&gt;
since all files appear to be local. A client can find files on the&lt;br /&gt;
&amp;quot;distributed&amp;quot; file system in the same way local files are found.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
File systems that use lookup tables for storing the location and&lt;br /&gt;
metadata of files (e.g., [3,11]) can locate resources trivially by&lt;br /&gt;
querying the lookup table. The table usually contains a pointer to either the&lt;br /&gt;
file itself or a server hosting that file, which can in turn handle the file&lt;br /&gt;
operation request. &lt;br /&gt;
&lt;br /&gt;
A very basic implementation of a metadata lookup is used in the Apollo Domain&lt;br /&gt;
File System [6]. A central name server maps client-readable strings&lt;br /&gt;
(e.g., &amp;quot;/home/dbarrera/file1&amp;quot; ) to UIDs. The name server can be&lt;br /&gt;
distributed by replicating it at multiple locations, allowing clients to query&lt;br /&gt;
the nearest server instead of a central one. &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses unique file identifiers to &lt;br /&gt;
populate a &#039;&#039;location database&#039;&#039;  on the central server which maps file&lt;br /&gt;
identifiers to locations. The server is therefore responsible for forwarding&lt;br /&gt;
file access requests to the correct client hosting that file.&lt;br /&gt;
&lt;br /&gt;
==Distributed Index Search==&lt;br /&gt;
Systems like Freenet [2] are designed to make it difficult for&lt;br /&gt;
unauthorized users to access restricted files. This is a difficult problem,&lt;br /&gt;
since the system aims to be highly distributed while at the same time providing&lt;br /&gt;
guarantees that files won&#039;t be read or modified by unauthorized third parties.&lt;br /&gt;
However, Freenet has developed an interesting approach to locating files: when&lt;br /&gt;
a file is requested from the network, a user must first obtain or calculate the&lt;br /&gt;
file key. The user&#039;s node requests that file&lt;br /&gt;
from neighboring nodes, which in turn check if the file is stored locally and,&lt;br /&gt;
if not, forward the request to the next nearest neighbor. If a node cannot forward&lt;br /&gt;
a request any longer (because a loop would be created or all nodes have&lt;br /&gt;
already been queried), then a failure message is transmitted back to the&lt;br /&gt;
previous node. If a file is found at some point along the request path,&lt;br /&gt;
then the file is sent back through all the intermediate nodes until it reaches&lt;br /&gt;
the request originator, which allows these intermediate nodes to keep a copy of&lt;br /&gt;
the file as a cache. The next time that file key is requested, a node which is&lt;br /&gt;
closer might have it, which will increase the retrieval speed. Nodes&lt;br /&gt;
&amp;quot;forget&amp;quot; about cached copies of files in a least recently used (LRU) manner,&lt;br /&gt;
allowing the network to automatically  balance load and use available space&lt;br /&gt;
optimally. &lt;br /&gt;
&lt;br /&gt;
Distributing a file index was also proposed by Plaxton et al. [8].&lt;br /&gt;
Their proposal, however, attempts to have all nodes in the network maintain a&lt;br /&gt;
&#039;&#039;virtual tree&#039;&#039;. The tree information is distributed such that each node&lt;br /&gt;
knows about copies of files residing on itself and all nodes that form the&lt;br /&gt;
subtree rooted at that node. All nodes are constantly being updated with&lt;br /&gt;
neighbor information, meaning that new nodes slowly obtain tree information to&lt;br /&gt;
become the roots of their subtrees. This method has the advantage of&lt;br /&gt;
distributing load and providing a hierarchical search functionality that can&lt;br /&gt;
use well known algorithms (BFS, DFS) to find resources on a network.&lt;br /&gt;
&lt;br /&gt;
==Pseudo-random Data Distribution==&lt;br /&gt;
Ceph [11] distributes data through a method that maximizes bandwidth and&lt;br /&gt;
efficiently uses storage resources. Ceph also avoids data imbalance (e.g.,&lt;br /&gt;
new devices are under-used) and load-asymmetries (e.g., often requested data&lt;br /&gt;
placed on only new devices) with a globally known algorithm called CRUSH&lt;br /&gt;
(Controlled Replication Under Scalable Hashing). By using a predefined number&lt;br /&gt;
of &#039;&#039;placement groups&#039;&#039; (the smallest unit of object storage), the&lt;br /&gt;
CRUSH algorithm stores and replicates data across the network in a&lt;br /&gt;
pseudo-random way. This algorithm tells the metadata servers both where the&lt;br /&gt;
data should be stored and where it can be found later, which helps clients and&lt;br /&gt;
metadata servers in locating resources. &lt;br /&gt;
&lt;br /&gt;
=Conclusions=&lt;br /&gt;
This paper has presented a brief survey of distributed file system research&lt;br /&gt;
conducted over the past 20 years. A wide range of distributed file systems have&lt;br /&gt;
been designed to have varying levels of scalability, usability and efficiency.&lt;br /&gt;
Depending on the requirements of a distributed file system, different approaches&lt;br /&gt;
may be taken to address two main concerns: file naming and file retrieval.&lt;br /&gt;
Unfortunately there is no clear winner in either of these categories, which&lt;br /&gt;
means that selecting the &amp;quot;right&amp;quot; method for a given file system will always&lt;br /&gt;
depend on the requirements and users of that system.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] D. R. Cheriton and T. P. Mann. Decentralizing a global naming service for improved performance and fault tolerance. ACM Transactions on Computer Systems, 7:147–183, 1989.&lt;br /&gt;
&lt;br /&gt;
[2] I. Clarke, O. Sandberg, B. Wiley, and T. Hong. Freenet: A distributed anonymous information storage and retrieval system. In Designing Privacy Enhancing Technologies, pages 46–66. Springer, 2001.&lt;br /&gt;
&lt;br /&gt;
[3] S. Ghemawat, H. Gobioff, and S. Leung. The Google file system. ACM SIGOPS Operating Systems Review, 37(5):29–43, 2003.&lt;br /&gt;
&lt;br /&gt;
[4] J. Howard. An overview of the Andrew file system. Carnegie Mellon University Information Technology Center, 1988.&lt;br /&gt;
&lt;br /&gt;
[5] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, C. Wells, et al. Oceanstore: An architecture for global-scale persistent storage. ACM SIGARCH Computer Architecture News, 28(5):190–201, 2000.&lt;br /&gt;
&lt;br /&gt;
[6] P. Levine. The Apollo DOMAIN Distributed File System. NATO ASI Series: Theory and Practice of Distributed Operating Systems, Y. Paker, JP. Banatre, M. Bozyi git, pages 241–260.&lt;br /&gt;
&lt;br /&gt;
[7] D. Mazieres, M. Kaminsky, M. Kaashoek, and E. Witchel. Separating key management from file system security. ACM SIGOPS Operating Systems Review, 33(5):124–139, 1999.&lt;br /&gt;
&lt;br /&gt;
[8] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing nearby copies of replicated objects in a distributed environment. pages 311–320, 1997.&lt;br /&gt;
&lt;br /&gt;
[9] M. Satyanarayanan. A survey of distributed file systems. Annual Review of Computer Science, 4(1):73–104, 1990.&lt;br /&gt;
&lt;br /&gt;
[10] M. Satyanarayanan, J. Kistler, P. Kumar, M. Okasaki, E. Siegel, and D. Steere. Coda: a highly available file system for a distributed workstation environment. Computers, IEEE Transactions on, 39(4):447–459, Apr. 1990.&lt;br /&gt;
&lt;br /&gt;
[11] S. Weil, S. Brandt, E. Miller, D. Long, and C. Maltzahn. Ceph: A scalable, high-performance distributed file system. In Proceedings of the 7th symposium on Operating systems design and implementation, pages 307–320. USENIX Association, 2006.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7463</id>
		<title>DistOS-2011W Naming and Locating Objects in Distributed Systems</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7463"/>
		<updated>2011-02-25T18:21:02Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
This paper is a survey of existing approaches to naming and locating&lt;br /&gt;
resources in distributed file systems. We survey proposals from the past 20&lt;br /&gt;
years and find that while there have been many improvements in the hardware&lt;br /&gt;
that powers distributed file systems, there are only a few well known&lt;br /&gt;
proposals for dealing with resource location and naming.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
The ability to name resources is important in any file system. Mapping&lt;br /&gt;
machine-readable names to human-readable names allows users to forget about how&lt;br /&gt;
the operating system (OS) handles file access, and focus on completing&lt;br /&gt;
desired tasks. &lt;br /&gt;
&lt;br /&gt;
In traditional file systems, users are mostly responsible for creating&lt;br /&gt;
meaningful file hierarchies for storing and later searching for files. Users &lt;br /&gt;
must be aware of file system restrictions (e.g., file name length, file size,&lt;br /&gt;
etc.). The&lt;br /&gt;
underlying file system is only in charge of moving data to or from physical&lt;br /&gt;
storage media. Distributed file systems offer a series of advantages to users&lt;br /&gt;
(e.g., increased storage space and data reliability), but must be designed&lt;br /&gt;
such that end-users are not aware of all the logic and processing occurring in&lt;br /&gt;
the background. Indeed, a distributed file system loses its appeal if the user&lt;br /&gt;
is required to do all the heavy lifting. &lt;br /&gt;
&lt;br /&gt;
Take, for example, an end-user wanting to access a PDF document. In a local&lt;br /&gt;
file system, the user only needs to locate the PDF file in the file hierarchy and&lt;br /&gt;
retrieve it from disk. In a distributed file system, the PDF file might be&lt;br /&gt;
stored on a remote server, or perhaps stored multiple times on multiple&lt;br /&gt;
servers. The problem then becomes how to enable end-users to locate the&lt;br /&gt;
correct copy of a file amongst a large volume of shared data. &lt;br /&gt;
&lt;br /&gt;
This paper focuses on two important aspects of distributed file systems: (1)&lt;br /&gt;
how files are named or identified uniquely; and (2) how files are found by&lt;br /&gt;
clients or metadata servers once they are stored in the network. We survey&lt;br /&gt;
distributed file systems and file system designs from as early as 1989 and as&lt;br /&gt;
recently as 2006. We find that there are a relatively small number of ways a&lt;br /&gt;
distributed file system can approach the problem of naming and locating files,&lt;br /&gt;
and the selected approach is always dependent on the requirements of the&lt;br /&gt;
system. &lt;br /&gt;
&lt;br /&gt;
=Naming Resources=&lt;br /&gt;
On non-distributed systems (e.g., a stand-alone desktop computer), file systems&lt;br /&gt;
use&lt;br /&gt;
an object&#039;s absolute path as a unique identifier for that object in the file&lt;br /&gt;
system. In practice, this means that there cannot be two objects with&lt;br /&gt;
the same name in the same location (e.g., a directory&lt;br /&gt;
like =/home/dbarrera/files/=  can&#039;t contain two files called&lt;br /&gt;
=file1= ). In distributed file systems, there is an obvious need for&lt;br /&gt;
allowing multiple files with the same human-readable name, and perhaps even the&lt;br /&gt;
same absolute path (although relative to a particular client) as other clients&lt;br /&gt;
sharing storage on the system. This section reviews&lt;br /&gt;
methods used by existing distributed file systems to handle object naming at a&lt;br /&gt;
massive (sometimes global) scale. &lt;br /&gt;
&lt;br /&gt;
Depending on the requirements of the file system (maximum number of clients,&lt;br /&gt;
concurrent read/writes, etc.), different approaches to naming might be taken.&lt;br /&gt;
Some file systems, such as Coda [10], aim to mimic UNIX-like file&lt;br /&gt;
naming. Other systems relax POSIX-like behaviour to allow for&lt;br /&gt;
better scalability and speed. &lt;br /&gt;
&lt;br /&gt;
==Local Naming==&lt;br /&gt;
The Sun Network File System (NFS) specifies that each client sees a UNIX file&lt;br /&gt;
namespace with a private root. Because each client is free to manage&lt;br /&gt;
its own namespace, several workstations mounting the same remote directory&lt;br /&gt;
might not have the same view of the files contained in that directory. However,&lt;br /&gt;
if file-sharing or location transparency is required, it can be achieved by&lt;br /&gt;
convention (e.g., users agreeing on calling a file a specific name) rather than&lt;br /&gt;
by design. &lt;br /&gt;
&lt;br /&gt;
One of the first distributed file systems, the Apollo DOMAIN File System&lt;br /&gt;
[6] uses 64-bit unique identifiers (UIDs) for every object in the&lt;br /&gt;
system. Each Apollo client also has a UID created at the time of its manufacture.&lt;br /&gt;
When a new file is created, the UID for that file is derived from the time and&lt;br /&gt;
the UID of the file&#039;s workstation (this guarantees uniqueness of UIDs per file&lt;br /&gt;
without a central server assigning them). &lt;br /&gt;
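The decentralized UID scheme described above can be sketched as follows. The field widths and layout are illustrative assumptions, not Apollo&#039;s actual format; the point is that a timestamp plus a manufacture-time workstation UID yields globally unique identifiers with no central server.&lt;br /&gt;

```python
import time

def derive_uid(workstation_uid, counter):
    """Sketch of Apollo-style decentralized UID generation: combine a
    manufacture-time workstation UID with the creation time (and a per-tick
    counter) so no central server needs to hand out identifiers.
    Field widths here are illustrative, not Apollo's actual layout."""
    timestamp = int(time.time()) % (2 ** 32)   # 32 bits of creation time
    node = workstation_uid % (2 ** 24)         # 24 bits of workstation identity
    seq = counter % (2 ** 8)                   # 8 bits to separate same-tick files
    return (timestamp * 2 ** 32) + (node * 2 ** 8) + seq
```

Two workstations can never collide (their node fields differ), and one workstation never collides with itself as long as the counter advances within a clock tick.&lt;br /&gt;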
&lt;br /&gt;
The Andrew file system [4] uses an internal 96-bit identifier for&lt;br /&gt;
uniquely identifying files. These identifiers are used in the background to&lt;br /&gt;
refer to files, but are never shown to users. Andrew clients see a partitioned&lt;br /&gt;
namespace comprised of a local and shared namespace. The shared namespace is&lt;br /&gt;
identical on all workstations, managed by a central server which can be&lt;br /&gt;
replicated. The local namespace is typically only used for files required to&lt;br /&gt;
boot an Andrew client, and to initialize the distributed client operation. &lt;br /&gt;
&lt;br /&gt;
==Cryptographic Naming==&lt;br /&gt;
OceanStore [5] stores objects at the lowest level by identifying&lt;br /&gt;
them with a&lt;br /&gt;
globally unique identifier (GUID). GUIDs are convenient in distributed&lt;br /&gt;
systems because they do not require a central authority to give them out. This&lt;br /&gt;
allows any client on the system to autonomously generate a valid GUID&lt;br /&gt;
with low probability of collisions (GUIDs are typically long bit strings e.g.,&lt;br /&gt;
more than 128 bits). At the same time, autonomous, decentralized namespace&lt;br /&gt;
management allows malicious clients to hijack someone else&#039;s namespace and&lt;br /&gt;
intentionally create collisions. To address this&lt;br /&gt;
issue, OceanStore uses a technique proposed by Mazieres et al. [7] called&lt;br /&gt;
&#039;&#039;self-certifying path names&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Self-certifying pathnames have all the benefits of public key cryptography&lt;br /&gt;
without the burden of key management, which is known to be difficult,&lt;br /&gt;
especially at a very large scale. One of the design goals of self-certifying&lt;br /&gt;
pathnames is for clients to cryptographically verify the contents of any file&lt;br /&gt;
on the network, without requiring external information. The novelty of this&lt;br /&gt;
approach is that file names inherently contain all information necessary to&lt;br /&gt;
communicate with remote servers. Essentially, an object&#039;s GUID is the secure&lt;br /&gt;
hash (SHA-1 or similar) of the object&#039;s owner&#039;s key and some human readable&lt;br /&gt;
name. By embedding a client key into the GUID, servers and other clients can&lt;br /&gt;
verify the identity and ownership of an object without querying a&lt;br /&gt;
third-party server.&lt;br /&gt;
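A minimal sketch of such a self-certifying name, assuming SHA-1 over a simple concatenation of key bytes and name (the exact byte layout in OceanStore may differ):&lt;br /&gt;

```python
import hashlib

def object_guid(owner_public_key, human_name):
    """Self-certifying name sketch: the GUID is a secure hash of the owner's
    public key and a human-readable name, so anyone holding both can
    re-derive the GUID and check ownership without a third party."""
    h = hashlib.sha1()
    h.update(owner_public_key)
    h.update(human_name.encode("utf-8"))
    return h.hexdigest()

def verify(guid, owner_public_key, human_name):
    # Verification is just re-derivation plus comparison.
    return object_guid(owner_public_key, human_name) == guid
```

Because the key is embedded in the name itself, a server answering a query for a GUID cannot silently substitute an object owned by a different key.&lt;br /&gt;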
&lt;br /&gt;
Freenet [2] also uses keypair-based naming but in a slightly&lt;br /&gt;
different way than OceanStore. Freenet identifies all files by a binary key&lt;br /&gt;
which is obtained by applying a hash function. There are three types of keys in&lt;br /&gt;
this distributed file system:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Keyword-signed key (KSK)&#039;&#039;&#039; This is the simplest identifier because it&lt;br /&gt;
is derived from an arbitrary text string chosen by the user who is storing the&lt;br /&gt;
file on the network. A user storing a PDF document might use the text string&lt;br /&gt;
&amp;quot;freenet/distributed/file/system&amp;quot; to describe the file. The string is used to&lt;br /&gt;
deterministically generate a private/public keypair. The public part of the key&lt;br /&gt;
is hashed and becomes the file identifier. &lt;br /&gt;
&lt;br /&gt;
We note that files can be recovered by guessing or brute-forcing the text&lt;br /&gt;
string. Also, nothing stops two different users from coming up with the same&lt;br /&gt;
descriptive string, and the second user&#039;s file would be rejected by the system,&lt;br /&gt;
as there would be a collision in the namespace.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Signed-subspace key (SSK)&#039;&#039;&#039; This method enables personal namespaces&lt;br /&gt;
for users. For this to work, users generate a public/private keypair using a&lt;br /&gt;
good random number generator. The user also creates a descriptive text string,&lt;br /&gt;
but in this case, it is XORed with the public key to generate the file key.&lt;br /&gt;
This method allows users to manage their own namespace (i.e., collisions can&lt;br /&gt;
still occur locally if the user picks the same string for two files). Users can&lt;br /&gt;
also publish a list of keywords and a public key if they want to make those&lt;br /&gt;
files publicly available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Content-hash key (CHK)&#039;&#039;&#039; In this method, the file key is derived by&lt;br /&gt;
hashing the contents of the file. Files are also encrypted with a random encryption&lt;br /&gt;
key specific to that file. For others to retrieve the file, the owner makes&lt;br /&gt;
available the file hash along with the decryption key.&lt;br /&gt;
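The three Freenet key types can be contrasted in a toy sketch. Real Freenet derives DSA keypairs; here the &amp;quot;public key&amp;quot; is simulated with a hash, so only the structure of each scheme is illustrated, not the actual cryptography.&lt;br /&gt;

```python
import hashlib

def ksk(description):
    """Keyword-signed key: a keypair is derived deterministically from the
    descriptive string and the hash of the public half becomes the file key.
    (The keypair derivation is simulated with a hash here.)"""
    simulated_public_key = hashlib.sha256(description.encode()).digest()
    return hashlib.sha256(simulated_public_key).hexdigest()

def ssk(public_key, description):
    """Signed-subspace key: hash the user's public key and the description
    independently, XOR the digests, and hash the result, giving each user
    a personal namespace."""
    a = hashlib.sha256(public_key).digest()
    b = hashlib.sha256(description.encode()).digest()
    xored = bytes(x ^ y for x, y in zip(a, b))
    return hashlib.sha256(xored).hexdigest()

def chk(file_contents):
    """Content-hash key: the key is simply the hash of the file's contents."""
    return hashlib.sha256(file_contents).hexdigest()
```

Note how a KSK depends only on the public string (hence the collision and brute-forcing issues noted above), while an SSK also depends on the user&#039;s key, and a CHK depends only on the data itself.&lt;br /&gt;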
&lt;br /&gt;
==Hierarchical naming==&lt;br /&gt;
Cheriton et al. [1] suggest naming objects using a long&lt;br /&gt;
name which includes multiple pieces of information: (1) the resource&#039;s name&lt;br /&gt;
and location on the file server where it resides; (2) the organization where&lt;br /&gt;
that file server is located; and (3) a global administrative domain&lt;br /&gt;
representing all the organizations participating in the distributed file system.&lt;br /&gt;
For example, a file name of &amp;quot;[edu/stanford/server4/bin/listdir&amp;quot; is split&lt;br /&gt;
into: [edu (global domain), /stanford/server4 (organization domain), and /bin/listdir (directory and file).&lt;br /&gt;
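Parsing such a name is mechanical, which is the scheme&#039;s main appeal. A sketch, where the field positions (one global-domain component, two organization components) are assumptions for this example:&lt;br /&gt;

```python
def split_global_name(name):
    """Split a Cheriton-style global name such as
    '[edu/stanford/server4/bin/listdir' into its three administrative
    levels. The '[' prefix marks the global root; the number of components
    per level is an assumption for this example."""
    parts = name.lstrip("[").split("/")
    return {
        "global_domain": parts[0],             # e.g. 'edu'
        "organization": "/".join(parts[1:3]),  # e.g. 'stanford/server4'
        "path": "/" + "/".join(parts[3:]),     # e.g. '/bin/listdir'
    }
```

A client can therefore route a request using nothing but the name: first to the global domain, then to the organization&#039;s server, which resolves the remaining path locally.&lt;br /&gt;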
&lt;br /&gt;
This naming scheme gives clients all the necessary information (using only the&lt;br /&gt;
file name) to locate a file in a globally distributed file system. While this&lt;br /&gt;
may seem like a good solution, there are a few inherent limitations to the&lt;br /&gt;
proposal.&lt;br /&gt;
&lt;br /&gt;
First, file replication and load balancing can only be done at the lowest level&lt;br /&gt;
(i.e., in the file server selected by the organization hosting the file). This&lt;br /&gt;
can lead to a bottleneck when multiple files in the same organization become&lt;br /&gt;
&amp;quot;hot&amp;quot;. The authors suggest using caching and multicast to improve performance&lt;br /&gt;
and avoid congestion on inter-organization links. Second, it requires all&lt;br /&gt;
organizations participating in the system to agree or regulate the common&lt;br /&gt;
namespace, much like the current Domain Name System (DNS). For this to work,&lt;br /&gt;
there must be an organization in which each stakeholder in the system is&lt;br /&gt;
equally represented. Such bodies do exist today; the Internet Corporation for&lt;br /&gt;
Assigned Names and Numbers (ICANN), for example, is a non-profit organization&lt;br /&gt;
that represents regional registrars, the Internet Engineering Task Force&lt;br /&gt;
(IETF), and Internet users and providers to help keep the Internet secure,&lt;br /&gt;
stable and inter-operable. However, these bodies carry large amounts of&lt;br /&gt;
administrative overhead and therefore limit the speed at which changes to&lt;br /&gt;
deployed implementations can take place. &lt;br /&gt;
&lt;br /&gt;
One advantage of the approach of Cheriton et al. is that names and directory&lt;br /&gt;
structures need only be unique within an organization/server. The system as a&lt;br /&gt;
whole does not have to keep track of every organization-level implementation,&lt;br /&gt;
yet different organizations should still be able to exchange data.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
The Google File System (GFS) [3] takes a different approach to&lt;br /&gt;
naming files. GFS assumes that all clients communicate with a single master&lt;br /&gt;
server, which keeps a table mapping full pathnames to metadata (file locks and&lt;br /&gt;
location). The namespace is therefore centrally managed, and all clients must&lt;br /&gt;
register file operations with the master before they can be performed. While&lt;br /&gt;
this architecture has an obvious central point of failure (which can be&lt;br /&gt;
addressed by replication), it has the advantage of not having to deal with a&lt;br /&gt;
distributed namespace. This central design also has the advantage of improving&lt;br /&gt;
data consistency across multi-level distribution nodes. It also allows data&lt;br /&gt;
to be moved to optimal nodes to increase performance or distribute load. It&#039;s&lt;br /&gt;
worth noting that lookup tables are a fundamentally different way to find&lt;br /&gt;
contents in a directory as compared to UNIX &#039;&#039;inodes&#039;&#039;  and related data&lt;br /&gt;
structures. This approach has inherent limitations, such as not being able to&lt;br /&gt;
support symlinks.&lt;br /&gt;
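The flat-table idea can be sketched as follows. The class and field names are illustrative, not GFS&#039;s actual schema; the sketch shows why directory listing becomes a prefix scan and why per-directory structures like symlinks do not fit.&lt;br /&gt;

```python
class MasterMetadataTable:
    """Sketch of a GFS-style master: one flat table maps full pathnames to
    metadata (lock state and data locations). There is no per-directory
    inode structure. Names and fields are illustrative assumptions."""
    def __init__(self):
        self.table = {}

    def create(self, path, locations):
        self.table[path] = {"locked": False, "locations": locations}

    def lookup(self, path):
        # Clients must consult the master before any file operation.
        return self.table.get(path)

    def list_directory(self, prefix):
        # Listing a directory is a prefix scan over the flat table,
        # not a walk of linked directory objects.
        return sorted(p for p in self.table if p.startswith(prefix))
```

Centralizing the table makes namespace operations trivial to keep consistent, at the cost of making the master a single point of failure that must be replicated.&lt;br /&gt;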
&lt;br /&gt;
Ceph [11] client nodes use near-POSIX file system interfaces which are&lt;br /&gt;
relayed back to a central metadata cluster. The metadata cluster is responsible&lt;br /&gt;
for managing the system-wide namespace, coordinating security and verifying&lt;br /&gt;
consistency. Ceph decouples data from metadata which enables the system to also&lt;br /&gt;
distribute metadata servers themselves. The metadata servers store pointers to&lt;br /&gt;
&amp;quot;object-storage clusters&amp;quot; which hold the actual data portion of the file. The&lt;br /&gt;
metadata servers also handle file read and write operations, which then&lt;br /&gt;
redirect clients to the appropriate object storage cluster or device. &lt;br /&gt;
&lt;br /&gt;
=Locating Resources=&lt;br /&gt;
&lt;br /&gt;
==Local File Systems==&lt;br /&gt;
In some distributed systems, files are copied locally and replicated to remote&lt;br /&gt;
servers in the background. NFS [9] is one example where clients&lt;br /&gt;
mount the remote file system locally. The remote directory structure is mapped&lt;br /&gt;
on to a local namespace which makes files transparently accessible to&lt;br /&gt;
clients. In this scheme, there is no need for distributing indexes or metadata,&lt;br /&gt;
since all files appear to be local. A client can find files on the&lt;br /&gt;
&amp;quot;distributed&amp;quot; file system in the same way local files are found.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
File systems that use lookup tables for storing the location and metadata&lt;br /&gt;
of files (e.g., [3,11]) can locate resources trivially by querying the&lt;br /&gt;
lookup table. The table usually contains a pointer to either the file itself&lt;br /&gt;
or a server hosting that file, which can in turn handle the file&lt;br /&gt;
operation request. &lt;br /&gt;
&lt;br /&gt;
A very basic implementation of a metadata lookup is used in the Apollo Domain&lt;br /&gt;
File System [6]. A central name server maps client-readable strings&lt;br /&gt;
(e.g., &amp;quot;/home/dbarrera/file1&amp;quot;) to UIDs. The name server can be&lt;br /&gt;
distributed by replicating it at multiple locations, allowing clients to query&lt;br /&gt;
the nearest server instead of a central one. &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses unique file identifiers to &lt;br /&gt;
populate a &#039;&#039;location database&#039;&#039;  on the central server which maps file&lt;br /&gt;
identifiers to locations. The server is therefore responsible for forwarding&lt;br /&gt;
file access requests to the correct client hosting that file.&lt;br /&gt;
&lt;br /&gt;
==Distributed Index Search==&lt;br /&gt;
Systems like Freenet [2] are designed to make it difficult for&lt;br /&gt;
unauthorized users to access restricted files. This is a difficult problem,&lt;br /&gt;
since the system aims to be highly distributed, but at the same time provide&lt;br /&gt;
guarantees that files won&#039;t be read or modified by unauthorized third-parties.&lt;br /&gt;
However, Freenet has developed an interesting approach to locating files: when&lt;br /&gt;
a file is requested from the network, a user must first obtain or calculate the&lt;br /&gt;
file key. The user&#039;s node requests that file&lt;br /&gt;
from neighboring nodes, who in turn check if the file is stored locally, and if&lt;br /&gt;
not forward the request to the next nearest neighbor. If a node cannot forward&lt;br /&gt;
a request any longer (because a loop would be created or all nodes have&lt;br /&gt;
already been queried), then a failure message is transmitted back to the&lt;br /&gt;
previous node. If a file is found at some point along the request path,&lt;br /&gt;
then the file is sent back through all the intermediate nodes until it reaches&lt;br /&gt;
the request originator, which allows these intermediate nodes to keep a copy of&lt;br /&gt;
the file as a cache. The next time that file key is requested, a node which is&lt;br /&gt;
closer might have it, which will increase the retrieval speed. Nodes&lt;br /&gt;
&amp;quot;forget&amp;quot; about cached copies of files in a least recently used (LRU) manner,&lt;br /&gt;
allowing the network to automatically  balance load and use available space&lt;br /&gt;
optimally. &lt;br /&gt;
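The forwarding-and-caching behaviour described above can be sketched as a recursive search. The node structure and the closeness metric (absolute difference of numeric ids) are illustrative assumptions; Freenet&#039;s actual routing and LRU eviction are more involved.&lt;br /&gt;

```python
class Node:
    """Toy node: a numeric id, a local store, and neighbor links (illustrative)."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.store = {}
        self.neighbors = []

def freenet_request(node, key, visited):
    """Sketch of Freenet's lookup: check the local store, otherwise forward
    to the unvisited neighbor closest to the key, backtracking on failure.
    Every node on the successful return path caches the file."""
    if key in node.store:
        return node.store[key]
    visited.add(node)
    # Try neighbors in order of closeness of their id to the requested key.
    for neighbor in sorted(node.neighbors, key=lambda n: abs(n.node_id - key)):
        if neighbor in visited:
            continue  # avoid routing loops
        result = freenet_request(neighbor, key, visited)
        if result is not None:
            node.store[key] = result  # cache on the return path (LRU eviction omitted)
            return result
    return None  # all reachable nodes queried: report failure upstream
```

The caching step is what makes popular files migrate toward their requesters, so later lookups terminate closer to the origin of the request.&lt;br /&gt;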
&lt;br /&gt;
Distributing a file index was also proposed by Plaxton et al. [8].&lt;br /&gt;
Their proposal, however, has all nodes in the network maintain a&lt;br /&gt;
&#039;&#039;virtual tree&#039;&#039;. The tree information is distributed such that each node&lt;br /&gt;
knows about copies of files residing on itself and all nodes that form the&lt;br /&gt;
subtree rooted at that node. All nodes are constantly being updated with&lt;br /&gt;
neighbor information, meaning that new nodes slowly obtain tree information to&lt;br /&gt;
become the roots of their subtrees. This method has the advantage of&lt;br /&gt;
distributing load and providing a hierarchical search functionality that can&lt;br /&gt;
use well known algorithms (BFS, DFS) to find resources on a network.&lt;br /&gt;
&lt;br /&gt;
==Pseudo-random Data Distribution==&lt;br /&gt;
Ceph [11] distributes data through a method that maximizes bandwidth and&lt;br /&gt;
efficiently uses storage resources. Ceph also avoids data imbalance (e.g.,&lt;br /&gt;
new devices are under-used) and load asymmetries (e.g., frequently requested&lt;br /&gt;
data placed only on new devices) with a globally known algorithm called CRUSH&lt;br /&gt;
(Controlled Replication Under Scalable Hashing). By using a predefined number&lt;br /&gt;
of &#039;&#039;placement groups&#039;&#039;  (the smallest unit of object storage groups), the&lt;br /&gt;
CRUSH algorithm stores and replicates data across the network in a&lt;br /&gt;
pseudo-random way. This algorithm tells the metadata servers both where the&lt;br /&gt;
data should be stored and where it can be found later, which helps clients and&lt;br /&gt;
metadata servers in locating resources. &lt;br /&gt;
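The key property is that placement is a deterministic function every party can recompute. The following is a toy stand-in for CRUSH under that assumption only; it ignores device weights, failure domains, and cluster-map changes, and all names are illustrative.&lt;br /&gt;

```python
import hashlib

def placement_group(object_id, pg_count):
    """Hash an object into one of a fixed number of placement groups,
    so any client or metadata server can recompute where data lives
    without consulting a lookup table."""
    digest = hashlib.sha256(object_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % pg_count

def devices_for_pg(pg, devices, replicas):
    """Deterministically pick 'replicas' distinct devices for a placement
    group by ranking devices with a per-(pg, device) hash. A stand-in for
    CRUSH that ignores weights and failure domains."""
    def score(dev):
        return hashlib.sha256(f"{pg}:{dev}".encode()).digest()
    return sorted(devices, key=score)[:replicas]
```

Because both functions are pure, storage and retrieval agree on data location by construction, which is exactly what lets Ceph avoid a central allocation table.&lt;br /&gt;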
&lt;br /&gt;
=Conclusions=&lt;br /&gt;
This paper has presented a brief survey of distributed file system research&lt;br /&gt;
conducted over the past 20 years. A wide range of distributed file systems have&lt;br /&gt;
been designed to have varying levels of scalability, usability and efficiency.&lt;br /&gt;
Depending on the requirements of a distributed file system, different approaches&lt;br /&gt;
may be taken to address two main concerns: file naming and file retrieval.&lt;br /&gt;
Unfortunately there is no clear winner in either of these categories, which&lt;br /&gt;
means that selecting the &amp;quot;right&amp;quot; method for a given file system will always&lt;br /&gt;
depend on the requirements and users of that system.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] D. R. Cheriton and T. P. Mann. Decentralizing a global naming service for improved performance and fault tolerance. ACM Transactions on Computer Systems, 7:147–183, 1989.&lt;br /&gt;
[2] I. Clarke, O. Sandberg, B. Wiley, and T. Hong. Freenet: A distributed anonymous information storage and retrieval system. In Designing Privacy Enhancing Technologies, pages 46–66. Springer, 2001.&lt;br /&gt;
[3] S. Ghemawat, H. Gobioff, and S. Leung. The Google file system. ACM SIGOPS Operating Systems Review, 37(5):29–43, 2003.&lt;br /&gt;
[4] J. Howard. An overview of the Andrew file system. Carnegie Mellon University Information Technology Center, 1988.&lt;br /&gt;
[5] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, C. Wells, et al. OceanStore: An architecture for global-scale persistent storage. ACM SIGARCH Computer Architecture News, 28(5):190–201, 2000.&lt;br /&gt;
[6] P. Levine. The Apollo DOMAIN distributed file system. In NATO ASI Series: Theory and Practice of Distributed Operating Systems, pages 241–260.&lt;br /&gt;
[7] D. Mazieres, M. Kaminsky, M. Kaashoek, and E. Witchel. Separating key management from file system security. ACM SIGOPS Operating Systems Review, 33(5):124–139, 1999.&lt;br /&gt;
[8] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing nearby copies of replicated objects in a distributed environment. pages 311–320, 1997.&lt;br /&gt;
[9] M. Satyanarayanan. A survey of distributed file systems. Annual Review of Computer Science, 4(1):73–104, 1990.&lt;br /&gt;
[10] M. Satyanarayanan, J. Kistler, P. Kumar, M. Okasaki, E. Siegel, and D. Steere. Coda: a highly available file system for a distributed workstation environment. IEEE Transactions on Computers, 39(4):447–459, Apr. 1990.&lt;br /&gt;
[11] S. Weil, S. Brandt, E. Miller, D. Long, and C. Maltzahn. Ceph: A scalable, high-performance distributed file system. In Proceedings of the 7th Symposium on Operating Systems Design and Implementation, pages 307–320. USENIX Association, 2006.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7462</id>
		<title>DistOS-2011W Naming and Locating Objects in Distributed Systems</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7462"/>
		<updated>2011-02-25T18:20:36Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Conclusions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
This paper is a survey of existing approaches to naming and locating&lt;br /&gt;
resources in distributed file systems. We survey proposals from the past 20&lt;br /&gt;
years and find that while there have been many improvements in the hardware&lt;br /&gt;
that powers distributed file systems, there are only a few well known&lt;br /&gt;
proposals for dealing with resource location and naming.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
The ability to name resources is important in any file system. Mapping machine&lt;br /&gt;
readable names to human readable names allows users to forget about the way&lt;br /&gt;
the operating system (OS) is handling file access, and focus on completing&lt;br /&gt;
desired tasks. &lt;br /&gt;
&lt;br /&gt;
In traditional file systems, users are mostly responsible for creating&lt;br /&gt;
meaningful file hierarchies for storing and later searching for files. Users &lt;br /&gt;
must be aware of file system restrictions (e.g., file name length, file size,&lt;br /&gt;
etc.). The&lt;br /&gt;
underlying file system is only in charge of moving data to or from physical&lt;br /&gt;
storage media. Distributed file systems offer a series of advantages to users&lt;br /&gt;
(e.g., increased storage space and data reliability), but must be designed&lt;br /&gt;
such that end-users are not aware of all the logic and processing occurring in&lt;br /&gt;
the background. Indeed, a distributed file system loses its appeal if the user&lt;br /&gt;
is required to do all the heavy lifting. &lt;br /&gt;
&lt;br /&gt;
Take, for example, an end-user wanting to access a PDF document. In a local&lt;br /&gt;
file system, the user only needs to locate the PDF file in the file hierarchy and&lt;br /&gt;
retrieve it from disk. In a distributed file system, the PDF file might be&lt;br /&gt;
stored on a remote server, or perhaps stored multiple times on multiple&lt;br /&gt;
servers. The problem then becomes how to enable end-users to locate the&lt;br /&gt;
correct copy of a file amongst a large volume of shared data. &lt;br /&gt;
&lt;br /&gt;
This paper focuses on two important aspects of distributed file systems: (1)&lt;br /&gt;
how files are named or identified uniquely; and (2) how files are found by&lt;br /&gt;
clients or metadata servers once they are stored in the network. We survey&lt;br /&gt;
distributed file systems and file system designs from as early as 1989 and as&lt;br /&gt;
recently as 2006. We find that there are a relatively small number of ways a&lt;br /&gt;
distributed file system can approach the problem of naming and locating files,&lt;br /&gt;
and the selected approach is always dependent on the requirements of the&lt;br /&gt;
system. &lt;br /&gt;
&lt;br /&gt;
=Naming Resources=&lt;br /&gt;
On non-distributed systems (e.g., a stand-alone desktop computer), file systems&lt;br /&gt;
use&lt;br /&gt;
an object&#039;s absolute path as a unique identifier for that object in the file&lt;br /&gt;
system. In practice, this means that there cannot be two objects with&lt;br /&gt;
the same name in the same location (e.g., a directory&lt;br /&gt;
like =/home/dbarrera/files/=  can&#039;t contain two files called&lt;br /&gt;
=file1= ). In distributed file systems, there is an obvious need for&lt;br /&gt;
allowing multiple files with the same human-readable name, and perhaps even the&lt;br /&gt;
same absolute path (although relative to a particular client) as other clients&lt;br /&gt;
sharing storage on the system. This section reviews&lt;br /&gt;
methods used by existing distributed file systems to handle object naming at a&lt;br /&gt;
massive (sometimes global) scale. &lt;br /&gt;
&lt;br /&gt;
Depending on the requirements of the file system (maximum number of clients,&lt;br /&gt;
concurrent read/writes, etc.), different approaches to naming might be taken.&lt;br /&gt;
Some file systems, such as Coda [10], aim to mimic UNIX-like file&lt;br /&gt;
naming. Other systems relax POSIX-like behaviour to allow for&lt;br /&gt;
better scalability and speed. &lt;br /&gt;
&lt;br /&gt;
==Local Naming==&lt;br /&gt;
The Sun Network File System (NFS) specifies that each client sees a UNIX file&lt;br /&gt;
namespace with a private root. Because each client is free to manage&lt;br /&gt;
its own namespace, several workstations mounting the same remote directory&lt;br /&gt;
might not have the same view of the files contained in that directory. However,&lt;br /&gt;
if file-sharing or location transparency is required, it can be achieved by&lt;br /&gt;
convention (e.g., users agreeing on calling a file a specific name) rather than&lt;br /&gt;
by design. &lt;br /&gt;
&lt;br /&gt;
One of the first distributed file systems, the Apollo DOMAIN File System&lt;br /&gt;
[6] uses 64-bit unique identifiers (UIDs) for every object in the&lt;br /&gt;
system. Each Apollo client also has a UID created at the time of its manufacture.&lt;br /&gt;
When a new file is created, the UID for that file is derived from the time and&lt;br /&gt;
the UID of the file&#039;s workstation (this guarantees uniqueness of UIDs per file&lt;br /&gt;
without a central server assigning them). &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses an internal 96-bit identifier for&lt;br /&gt;
uniquely identifying files. These identifiers are used in the background to&lt;br /&gt;
refer to files, but are never shown to users. Andrew clients see a partitioned&lt;br /&gt;
namespace comprised of a local and shared namespace. The shared namespace is&lt;br /&gt;
identical on all workstations, managed by a central server which can be&lt;br /&gt;
replicated. The local namespace is typically only used for files required to&lt;br /&gt;
boot an Andrew client, and to initialize the distributed client operation. &lt;br /&gt;
&lt;br /&gt;
==Cryptographic Naming==&lt;br /&gt;
OceanStore [5] stores objects at the lowest level by identifying&lt;br /&gt;
them with a&lt;br /&gt;
globally unique identifier (GUID). GUIDs are convenient in distributed&lt;br /&gt;
systems because they do not require a central authority to give them out. This&lt;br /&gt;
allows any client on the system to autonomously generate a valid GUID&lt;br /&gt;
with low probability of collisions (GUIDs are typically long bit strings e.g.,&lt;br /&gt;
more than 128 bits). At the same time, autonomous, decentralized namespace&lt;br /&gt;
management allows malicious clients to hijack someone else&#039;s namespace and&lt;br /&gt;
intentionally create collisions. To address this&lt;br /&gt;
issue, OceanStore uses a technique proposed by Mazieres et al. [7] called&lt;br /&gt;
&#039;&#039;self-certifying path names&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Self-certifying pathnames have all the benefits of public key cryptography&lt;br /&gt;
without the burden of key management, which is known to be difficult,&lt;br /&gt;
especially at a very large scale. One of the design goals of self-certifying&lt;br /&gt;
pathnames is for clients to cryptographically verify the contents of any file&lt;br /&gt;
on the network, without requiring external information. The novelty of this&lt;br /&gt;
approach is that file names inherently contain all information necessary to&lt;br /&gt;
communicate with remote servers. Essentially, an object&#039;s GUID is the secure&lt;br /&gt;
hash (SHA-1 or similar) of the object&#039;s owner&#039;s key and some human readable&lt;br /&gt;
name. By embedding a client key into the GUID, servers and other clients can&lt;br /&gt;
verify the identity and ownership of an object without querying a&lt;br /&gt;
third-party server.&lt;br /&gt;
&lt;br /&gt;
Freenet [2] also uses keypair-based naming but in a slightly&lt;br /&gt;
different way than OceanStore. Freenet identifies all files by a binary key&lt;br /&gt;
which is obtained by applying a hash function. There are three types of keys in&lt;br /&gt;
this distributed file system:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Keyword-signed key (KSK)&#039;&#039;&#039; This is the simplest identifier because it&lt;br /&gt;
is derived from an arbitrary text string chosen by the user who is storing the&lt;br /&gt;
file on the network. A user storing a PDF document might use the text string&lt;br /&gt;
&amp;quot;freenet/distributed/file/system&amp;quot; to describe the file. The string is used to&lt;br /&gt;
deterministically generate a private/public keypair. The public part of the key&lt;br /&gt;
is hashed and becomes the file identifier. &lt;br /&gt;
&lt;br /&gt;
We note that files can be recovered by guessing or brute-forcing the text&lt;br /&gt;
string. Also, nothing stops two different users from coming up with the same&lt;br /&gt;
descriptive string, and the second user&#039;s file would be rejected by the system,&lt;br /&gt;
as there would be a collision in the namespace.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Signed-subspace key (SSK)&#039;&#039;&#039; This method enables personal namespaces&lt;br /&gt;
for users. For this to work, users generate a public/private keypair using a&lt;br /&gt;
good random number generator. The user also creates a descriptive text string,&lt;br /&gt;
but in this case, it is XORed with the public key to generate the file key.&lt;br /&gt;
This method allows users to manage their own namespace (i.e., collisions can&lt;br /&gt;
still occur locally if the user picks the same string for two files). Users can&lt;br /&gt;
also publish a list of keywords and a public key if they want to make those&lt;br /&gt;
files publicly available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Content-hash key (CHK)&#039;&#039;&#039; In this method, the file key is derived by&lt;br /&gt;
hashing the contents of the file. Files are also encrypted with a random encryption&lt;br /&gt;
key specific to that file. For others to retrieve the file, the owner makes&lt;br /&gt;
available the file hash along with the decryption key.&lt;br /&gt;
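The two simplest key types above can be sketched in Python. This is only an illustration: real Freenet derives DSA keypairs and uses its own hash primitives, whereas here SHA-256 and a hash-based stand-in "keypair" are assumed.&lt;br /&gt;

```python
import hashlib

def ksk(description: str) -> str:
    """Keyword-signed key: deterministically derive a stand-in 'keypair'
    from the descriptive string, then hash the public half to get the
    file identifier. (Real Freenet uses DSA; this is illustrative.)"""
    seed = hashlib.sha256(description.encode()).digest()
    public_part = hashlib.sha256(b"pub" + seed).digest()  # stand-in public key
    return hashlib.sha256(public_part).hexdigest()

def chk(contents: bytes) -> str:
    """Content-hash key: the file key is simply a hash of the contents."""
    return hashlib.sha256(contents).hexdigest()
```

Because the KSK derivation is deterministic, any two users choosing the same descriptive string produce the same key, which is exactly the collision problem noted above.&lt;br /&gt;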
&lt;br /&gt;
==Hierarchical naming==&lt;br /&gt;
Cheriton et al. [1] suggest naming objects using a long&lt;br /&gt;
name which includes multiple pieces of information: (1) the resource&#039;s name&lt;br /&gt;
and location on the file server where it resides; (2) the organization where&lt;br /&gt;
that file server is located; and (3) a global administrative domain&lt;br /&gt;
representing all the organizations participating in the distributed file system.&lt;br /&gt;
For example, a file name of &amp;quot;[edu/stanford/server4/bin/listdir&amp;quot; is split&lt;br /&gt;
into: [edu (global domain), /stanford/server4 (organization domain), and /bin/listdir (directory and file).&lt;br /&gt;
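That split can be sketched as follows; parse_name is a hypothetical helper, and the fixed field positions are an assumption, since the paper's actual name grammar is richer.&lt;br /&gt;

```python
def parse_name(name: str):
    """Split a global name like "[edu/stanford/server4/bin/listdir" into
    (global domain, organization domain, server-local path). The leading
    "[" marks the global root."""
    if not name.startswith("["):
        raise ValueError("expected a global name starting with '['")
    parts = name[1:].split("/")
    global_domain = parts[0]                    # e.g. "edu"
    organization = "/" + "/".join(parts[1:3])   # e.g. "/stanford/server4"
    local_path = "/" + "/".join(parts[3:])      # e.g. "/bin/listdir"
    return global_domain, organization, local_path
```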
&lt;br /&gt;
This naming scheme gives clients all the necessary information (using only the&lt;br /&gt;
file name) to locate a file in a globally distributed file system. While this&lt;br /&gt;
may seem like a good solution, there are a few inherent limitations to the&lt;br /&gt;
proposal.&lt;br /&gt;
&lt;br /&gt;
First, file replication and load balancing can only be done at the lowest level&lt;br /&gt;
(i.e., in the file server selected by the organization hosting the file). This&lt;br /&gt;
can lead to a bottleneck when multiple files in the same organization become&lt;br /&gt;
&amp;quot;hot&amp;quot;. The authors suggest using caching and multicast to improve performance&lt;br /&gt;
and avoid congestion on inter-organization links. Second, it requires all&lt;br /&gt;
organizations participating in the system to agree or regulate the common&lt;br /&gt;
namespace, much like the current Domain Name System (DNS). For this to work&lt;br /&gt;
there must be an organization in which each stakeholder in the system is&lt;br /&gt;
equally represented. While such bodies do exist today (e.g., ICANN, the&lt;br /&gt;
Internet Corporation for Assigned Names and Numbers, a non-profit organization&lt;br /&gt;
that represents regional registrars, the Internet Engineering Task Force&lt;br /&gt;
(IETF), and Internet users and providers to help keep the Internet secure,&lt;br /&gt;
stable, and interoperable), they carry large amounts of&lt;br /&gt;
administrative overhead and therefore limit the speed at which changes to&lt;br /&gt;
deployed implementations can take place. &lt;br /&gt;
&lt;br /&gt;
One advantage of the approach of Cheriton et al. is that names and directory&lt;br /&gt;
structures must only be unique within an organization/server. The system as a&lt;br /&gt;
whole does not have to keep track of every organization-level implementation,&lt;br /&gt;
yet different organizations should still be able to exchange data.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
The Google File System (GFS) [3] takes a different approach to&lt;br /&gt;
naming files. GFS assumes that all the clients communicate with a single master&lt;br /&gt;
server, which keeps a table mapping full pathnames to metadata (file locks and&lt;br /&gt;
location). The namespace is therefore centrally managed, and all clients must&lt;br /&gt;
register file operations with the master before they can be performed. While&lt;br /&gt;
this architecture has an obvious central point of failure (which can be&lt;br /&gt;
addressed by replication), it has the advantage of not having to deal with a&lt;br /&gt;
distributed namespace. This central design also has the advantage of improving&lt;br /&gt;
data consistency across multi-level distribution nodes. It also allows data&lt;br /&gt;
to be moved to optimal nodes to increase performance or distribute load. It&#039;s&lt;br /&gt;
worth noting that lookup tables are a fundamentally different way to find&lt;br /&gt;
contents in a directory as compared to UNIX &#039;&#039;inodes&#039;&#039;  and related data&lt;br /&gt;
structures. This approach has inherent limitations such as not being able to&lt;br /&gt;
support symlinks.&lt;br /&gt;
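A minimal sketch of such a master-side lookup table, assuming a flat Python dict from full pathname to metadata (the names Master, create, and lookup are illustrative, not GFS's actual API):&lt;br /&gt;

```python
class Master:
    """Toy GFS-style master: a flat table from full pathname to metadata.
    No real locking or chunk servers; this only shows the lookup shape."""
    def __init__(self):
        self.table = {}  # full pathname -> {"locations": [...], "locked": bool}

    def create(self, path, locations):
        self.table[path] = {"locations": list(locations), "locked": False}

    def lookup(self, path):
        # A flat lookup: no inode walk and no per-directory structure,
        # which is why features like symlinks do not fall out naturally.
        return self.table.get(path)
```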
&lt;br /&gt;
Ceph [11] client nodes use near-POSIX file system interfaces which are&lt;br /&gt;
relayed back to a central metadata cluster. The metadata cluster is responsible&lt;br /&gt;
for managing the system-wide namespace, coordinating security and verifying&lt;br /&gt;
consistency. Ceph decouples data from metadata which enables the system to also&lt;br /&gt;
distribute metadata servers themselves. The metadata servers store pointers to&lt;br /&gt;
&amp;quot;object-storage clusters&amp;quot; which hold the actual data portion of the file. The&lt;br /&gt;
metadata servers also handle file read and write operations, which then&lt;br /&gt;
redirect clients to the appropriate object storage cluster or device. &lt;br /&gt;
&lt;br /&gt;
=Locating Resources=&lt;br /&gt;
&lt;br /&gt;
==Local File Systems==&lt;br /&gt;
In some distributed systems, files are copied locally and replicated to remote&lt;br /&gt;
servers in the background. NFS [9] is one example where clients&lt;br /&gt;
mount the remote file system locally. The remote directory structure is mapped&lt;br /&gt;
on to a local namespace which makes files transparently accessible to&lt;br /&gt;
clients. In this scheme, there is no need for distributing indexes or metadata,&lt;br /&gt;
since all files appear to be local. A client can find files on the&lt;br /&gt;
&amp;quot;distributed&amp;quot; file system in the same way local files are found.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
File systems that use lookup tables for storing the location and&lt;br /&gt;
metadata of files (e.g., [3,11]) can locate resources trivially by&lt;br /&gt;
querying the lookup table. The table usually contains a pointer to either the&lt;br /&gt;
file itself or a server hosting that file which can in turn handle the file&lt;br /&gt;
operation request. &lt;br /&gt;
&lt;br /&gt;
A very basic implementation of a metadata lookup is used in the Apollo Domain&lt;br /&gt;
File System [6]. A central name server maps client-readable strings&lt;br /&gt;
(e.g., &amp;quot;/home/dbarrera/file1&amp;quot;) to UIDs. The name server can be&lt;br /&gt;
distributed by replicating it at multiple locations, allowing clients to query&lt;br /&gt;
the nearest server instead of a central one. &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses unique file identifiers to &lt;br /&gt;
populate a &#039;&#039;location database&#039;&#039;  on the central server which maps file&lt;br /&gt;
identifiers to locations. The server is therefore responsible for forwarding&lt;br /&gt;
file access requests to the correct client hosting that file.&lt;br /&gt;
&lt;br /&gt;
==Distributed Index Search==&lt;br /&gt;
Systems like Freenet [2] are designed to make it difficult for&lt;br /&gt;
unauthorized users to access restricted files. This is a hard problem,&lt;br /&gt;
since the system aims to be highly distributed, but at the same time provide&lt;br /&gt;
guarantees that files won&#039;t be read or modified by unauthorized third-parties.&lt;br /&gt;
However, Freenet has developed an interesting approach to locating files: when&lt;br /&gt;
a file is requested from the network, a user must first obtain or calculate the&lt;br /&gt;
file key. The user&#039;s node requests that file&lt;br /&gt;
from neighboring nodes, who in turn check if the file is stored locally, and if&lt;br /&gt;
not forward the request to the next nearest neighbor. If a node cannot forward&lt;br /&gt;
a request any longer (because a loop would be created or all nodes have&lt;br /&gt;
already been queried), then a failure message is transmitted back to the&lt;br /&gt;
previous node. If a file is found at some point along the request path,&lt;br /&gt;
then the file is sent back through all the intermediate nodes until it reaches&lt;br /&gt;
the request originator, which allows these intermediate nodes to keep a copy of&lt;br /&gt;
the file as a cache. The next time that file key is requested, a node which is&lt;br /&gt;
closer might have it, which will increase the retrieval speed. Nodes&lt;br /&gt;
&amp;quot;forget&amp;quot; about cached copies of files in a least recently used (LRU) manner,&lt;br /&gt;
allowing the network to automatically  balance load and use available space&lt;br /&gt;
optimally. &lt;br /&gt;
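The request-forwarding and caching behaviour described above can be sketched as follows, assuming a simple recursive search with a visited set standing in for Freenet's hops-to-live mechanism:&lt;br /&gt;

```python
from collections import OrderedDict

class Node:
    """Toy Freenet-style node: a name, an ordered neighbor list
    ("nearest first" under some stand-in metric), and an LRU store."""
    def __init__(self, name, cache_size=2):
        self.name = name
        self.neighbors = []
        self.store = OrderedDict()   # key -> data, least recently used first
        self.cache_size = cache_size

    def request(self, key, visited=None):
        visited = visited if visited is not None else set()
        visited.add(self.name)
        if key in self.store:
            self.store.move_to_end(key)        # refresh LRU position
            return self.store[key]
        for n in self.neighbors:
            if n.name in visited:
                continue                        # would create a loop: skip
            data = n.request(key, visited)
            if data is not None:
                self._cache(key, data)          # intermediate nodes keep a copy
                return data
        return None                             # "failure" propagates back

    def _cache(self, key, data):
        self.store[key] = data
        if len(self.store) > self.cache_size:
            self.store.popitem(last=False)      # forget in LRU order
```

After one successful request, every node on the return path holds a cached copy, so the next request for the same key terminates closer to its originator.&lt;br /&gt;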
&lt;br /&gt;
Distributing a file index was also proposed by Plaxton et al. [8].&lt;br /&gt;
Their proposal, however, has all nodes in the network maintain a&lt;br /&gt;
&#039;&#039;virtual tree&#039;&#039;. The tree information is distributed such that each node&lt;br /&gt;
knows about copies of files residing on itself and all nodes that form the&lt;br /&gt;
subtree rooted at that node. All nodes are constantly being updated with&lt;br /&gt;
neighbor information, meaning that new nodes slowly obtain tree information to&lt;br /&gt;
become the roots of their subtrees. This method has the advantage of&lt;br /&gt;
distributing load and providing a hierarchical search functionality that can&lt;br /&gt;
use well known algorithms (BFS, DFS) to find resources on a network.&lt;br /&gt;
&lt;br /&gt;
==Pseudo-random Data Distribution==&lt;br /&gt;
Ceph [11] distributes data through a method that maximizes bandwidth and&lt;br /&gt;
efficiently uses storage resources. Ceph also avoids data imbalance (e.g.,&lt;br /&gt;
new devices are under-used) and load-asymmetries (e.g., often requested data&lt;br /&gt;
placed on only new devices) with a globally known algorithm called CRUSH&lt;br /&gt;
(Controlled Replication Under Scalable Hashing). By using a predefined number&lt;br /&gt;
of &#039;&#039;placement groups&#039;&#039;  (the smallest unit of object storage groups), the&lt;br /&gt;
CRUSH algorithm stores and replicates data across the network in a&lt;br /&gt;
pseudo-random way. This algorithm tells the metadata servers both where the&lt;br /&gt;
data should be stored and where it can be found later, which helps clients and&lt;br /&gt;
metadata servers in locating resources. &lt;br /&gt;
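The flavour of this idea can be sketched as follows. CRUSH itself weights devices and descends a storage hierarchy; here a plain hash-based ranking over a flat device list is assumed for illustration:&lt;br /&gt;

```python
import hashlib

NUM_PGS = 16                                  # predefined placement-group count
DEVICES = ["osd0", "osd1", "osd2", "osd3"]    # hypothetical storage devices

def placement_group(object_name: str) -> int:
    # Objects hash into one of a fixed number of placement groups.
    h = int(hashlib.sha256(object_name.encode()).hexdigest(), 16)
    return h % NUM_PGS

def pg_to_devices(pg: int, replicas: int = 2):
    # Deterministic pseudo-random choice of devices for a placement group:
    # anyone who knows the device map can recompute where data lives,
    # with no lookup table consulted.
    ranked = sorted(DEVICES,
                    key=lambda d: hashlib.sha256(f"{pg}:{d}".encode()).hexdigest())
    return ranked[:replicas]

def locate(object_name: str, replicas: int = 2):
    return pg_to_devices(placement_group(object_name), replicas)
```

Because the mapping is a pure function of the object name and the device map, the same computation answers both "where should this be stored?" and "where can it be found later?".&lt;br /&gt;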
&lt;br /&gt;
=Conclusions=&lt;br /&gt;
This paper has presented a brief survey of distributed file system research&lt;br /&gt;
conducted over the past 20 years. A wide range of distributed file systems have&lt;br /&gt;
been designed to have varying levels of scalability, usability and efficiency.&lt;br /&gt;
Depending on the requirements of a distributed file system, different approaches&lt;br /&gt;
may be taken to address two main concerns: file naming and file retrieval.&lt;br /&gt;
Unfortunately there is no clear winner in either of these categories, which&lt;br /&gt;
means that selecting the &amp;quot;right&amp;quot; method for a given file system will always&lt;br /&gt;
depend on the requirements and users of that system.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] D. R. Cheriton and T. P. Mann. Decentralizing a global naming service for improved performance and&lt;br /&gt;
    fault tolerance. ACM Transactions on Computer Systems, 7:147–183, 1989.&lt;br /&gt;
[2] I. Clarke, O. Sandberg, B. Wiley, and T. Hong. Freenet: A distributed anonymous information storage&lt;br /&gt;
    and retrieval system. In Designing Privacy Enhancing Technologies, pages 46–66. Springer, 2001.&lt;br /&gt;
[3] S. Ghemawat, H. Gobioff, and S. Leung. The Google file system. ACM SIGOPS Operating Systems&lt;br /&gt;
    Review, 37(5):29–43, 2003.&lt;br /&gt;
[4] J. Howard. An overview of the Andrew file system. Carnegie Mellon University Information Technology Center, 1988.&lt;br /&gt;
[5] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weath-&lt;br /&gt;
    erspoon, C. Wells, et al. Oceanstore: An architecture for global-scale persistent storage. ACM SIGARCH&lt;br /&gt;
    Computer Architecture News, 28(5):190–201, 2000.&lt;br /&gt;
[6] P. Levine. The Apollo DOMAIN Distributed File System. NATO ASI Series: Theory and Practice of&lt;br /&gt;
    Distributed Operating Systems, Y. Paker, J.-P. Banatre, and M. Bozyigit, editors, pages 241–260.&lt;br /&gt;
[7] D. Mazieres, M. Kaminsky, M. Kaashoek, and E. Witchel. Separating key management from file system&lt;br /&gt;
    security. ACM SIGOPS Operating Systems Review, 33(5):124–139, 1999.&lt;br /&gt;
[8] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing nearby copies of replicated&lt;br /&gt;
    objects in a distributed environment. In ACM Symposium on Parallel Algorithms and Architectures (SPAA), pages 311–320, 1997.&lt;br /&gt;
[9] M. Satyanarayanan. A survey of distributed file systems. Annual Review of Computer Science, 4(1):73–&lt;br /&gt;
    104, 1990.&lt;br /&gt;
[10] M. Satyanarayanan, J. J. Kistler, P. Kumar, M. E. Okasaki, E. H. Siegel, and D. C. Steere. Coda: A highly&lt;br /&gt;
    available file system for a distributed workstation environment. IEEE Transactions on Computers,&lt;br /&gt;
    39(4):447–459, 1990.&lt;br /&gt;
[11] S. A. Weil, S. A. Brandt, E. L. Miller, D. D. E. Long, and C. Maltzahn. Ceph: A scalable, high-performance&lt;br /&gt;
    distributed file system. In Proceedings of the 7th Symposium on Operating Systems Design and&lt;br /&gt;
    Implementation (OSDI), pages 307–320, 2006.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7461</id>
		<title>DistOS-2011W Naming and Locating Objects in Distributed Systems</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Naming_and_Locating_Objects_in_Distributed_Systems&amp;diff=7461"/>
		<updated>2011-02-25T18:18:40Z</updated>

		<summary type="html">&lt;p&gt;Dbarrera: /* Distributed Index Search */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
This paper is a survey of existing approaches to naming and locating&lt;br /&gt;
resources in distributed file systems. We survey proposals from the past 20&lt;br /&gt;
years and find that while there have been many improvements in the hardware&lt;br /&gt;
that powers distributed file systems, there are only a few well known&lt;br /&gt;
proposals for dealing with resource location and naming.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
The ability to name resources is important in any file system. Mapping machine&lt;br /&gt;
readable names to human readable names allows users to forget about the way&lt;br /&gt;
the operating system (OS) is handling file access, and focus on completing&lt;br /&gt;
desired tasks. &lt;br /&gt;
&lt;br /&gt;
In traditional file systems, users are mostly responsible for creating&lt;br /&gt;
meaningful file hierarchies for storing and later searching for files. Users &lt;br /&gt;
must be aware of file system restrictions (e.g., file name length, file size,&lt;br /&gt;
etc.). The&lt;br /&gt;
underlying file system is only in charge of moving data to or from physical&lt;br /&gt;
storage media. Distributed file systems offer a series of advantages to users&lt;br /&gt;
(e.g., increased storage space and data reliability), but must be designed&lt;br /&gt;
such that end-users are not aware of all the logic and processing occurring in&lt;br /&gt;
the background. Indeed, a distributed file system loses its appeal if the user&lt;br /&gt;
is required to do all the heavy lifting. &lt;br /&gt;
&lt;br /&gt;
Take for example, an end-user wanting to access a PDF document. In a local&lt;br /&gt;
file system, the user must only locate the PDF file in the file hierarchy, and&lt;br /&gt;
retrieve it from disk. In a distributed file system, the PDF file might be&lt;br /&gt;
stored on a remote server, or perhaps stored multiple times on multiple&lt;br /&gt;
servers. The problem then becomes how to enable end-users to locate the&lt;br /&gt;
correct copy of a file amongst a large volume of shared data. &lt;br /&gt;
&lt;br /&gt;
This paper focuses on two important aspects of distributed file systems: (1)&lt;br /&gt;
how files are named or identified uniquely; and (2) how files are found by&lt;br /&gt;
clients or metadata servers once they are stored in the network. We survey&lt;br /&gt;
distributed file systems and file system designs from as early as 1989 and as&lt;br /&gt;
recently as 2006. We find that there are a relatively small number of ways a&lt;br /&gt;
distributed file system can approach the problem of naming and locating files,&lt;br /&gt;
and the selected approach is always dependent on the requirements of the&lt;br /&gt;
system. &lt;br /&gt;
&lt;br /&gt;
=Naming Resources=&lt;br /&gt;
On non-distributed systems (e.g., a stand-alone desktop computer), file systems&lt;br /&gt;
use&lt;br /&gt;
an object&#039;s absolute path as a unique identifier for that object in the file&lt;br /&gt;
system. In practice this means that there can&#039;t be two objects with&lt;br /&gt;
the same name in the same location (e.g., a directory&lt;br /&gt;
like =/home/dbarrera/files/=  can&#039;t contain two files called&lt;br /&gt;
=file1= ). In distributed file systems, there is an obvious need for&lt;br /&gt;
allowing multiple files with the same human-readable name, and perhaps even the&lt;br /&gt;
same absolute path (although relative to a particular client) as other clients&lt;br /&gt;
sharing storage on the system. This section reviews&lt;br /&gt;
methods used by existing distributed file systems to handle object naming at a&lt;br /&gt;
massive (sometimes global) scale. &lt;br /&gt;
&lt;br /&gt;
Depending on the requirements of the file system (maximum number of clients,&lt;br /&gt;
concurrent read/writes, etc.), different approaches to naming might be taken.&lt;br /&gt;
Some file systems, such as Coda [10], aim to mimic UNIX-like file&lt;br /&gt;
naming. Other systems relax POSIX-like behaviour to allow for&lt;br /&gt;
better scalability and speed. &lt;br /&gt;
&lt;br /&gt;
==Local Naming==&lt;br /&gt;
The Sun Network File System (NFS) specifies that each client sees a UNIX file&lt;br /&gt;
namespace with a private root. Because each client is free to manage&lt;br /&gt;
its own namespace, several workstations mounting the same remote directory&lt;br /&gt;
might not have the same view of the files contained in that directory. However,&lt;br /&gt;
if file-sharing or location transparency is required, it can be achieved by&lt;br /&gt;
convention (e.g., users agreeing on calling a file a specific name) rather than&lt;br /&gt;
by design. &lt;br /&gt;
&lt;br /&gt;
One of the first distributed file systems, the Apollo DOMAIN File System&lt;br /&gt;
[6] uses 64-bit unique identifiers (UIDs) for every object in the&lt;br /&gt;
system. Each Apollo client also has a UID created at the time of its manufacture.&lt;br /&gt;
When a new file is created, the UID for that file is derived from the time and&lt;br /&gt;
UID of the file&#039;s workstation (this guarantees uniqueness of UIDs per file&lt;br /&gt;
without a central server assigning them). &lt;br /&gt;
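A sketch of this style of decentralized identifier generation, assuming illustrative field widths (the real DOMAIN UID layout differs):&lt;br /&gt;

```python
import time

NODE_ID_BITS = 20  # stand-in field width; the real DOMAIN layout differs

def make_uid(node_id: int, clock=time.time) -> int:
    """Combine the workstation's factory-assigned node id with a timestamp
    so that no central server is needed and UIDs generated on different
    machines cannot collide."""
    timestamp = int(clock() * 1000)  # millisecond resolution
    return (timestamp << NODE_ID_BITS) | (node_id & ((1 << NODE_ID_BITS) - 1))
```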
&lt;br /&gt;
The Andrew file system [4] uses an internal 96-bit identifier for&lt;br /&gt;
uniquely identifying files. These identifiers are used in the background to&lt;br /&gt;
refer to files, but are never shown to users. Andrew clients see a partitioned&lt;br /&gt;
namespace comprised of a local and shared namespace. The shared namespace is&lt;br /&gt;
identical on all workstations, managed by a central server which can be&lt;br /&gt;
replicated. The local namespace is typically only used for files required to&lt;br /&gt;
boot an Andrew client, and to initialize the distributed client operation. &lt;br /&gt;
&lt;br /&gt;
==Cryptographic Naming==&lt;br /&gt;
OceanStore [5] stores objects at the lowest level by identifying&lt;br /&gt;
them with a&lt;br /&gt;
globally unique identifier (GUID). GUIDs are convenient in distributed&lt;br /&gt;
systems because they do not require a central authority to give them out. This&lt;br /&gt;
allows any client on the system to autonomously generate a valid GUID&lt;br /&gt;
with low probability of collisions (GUIDs are typically long bit strings e.g.,&lt;br /&gt;
more than 128 bits). At the same time, autonomous, decentralized namespace&lt;br /&gt;
management also allows malicious clients to hijack&lt;br /&gt;
someone else&#039;s namespace and intentionally create collisions. To address this&lt;br /&gt;
issue, OceanStore uses a technique proposed by Mazieres et al. [7] called&lt;br /&gt;
&#039;&#039;self-certifying path names&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Self-certifying pathnames have all the benefits of public key cryptography&lt;br /&gt;
without the burden of key management, which is known to be difficult,&lt;br /&gt;
especially at a very large scale. One of the design goals of self-certifying&lt;br /&gt;
pathnames is for clients to cryptographically verify the contents of any file&lt;br /&gt;
on the network, without requiring external information. The novelty of this&lt;br /&gt;
approach is that file names inherently contain all information necessary to&lt;br /&gt;
communicate with remote servers. Essentially, an object&#039;s GUID is the secure&lt;br /&gt;
hash (SHA-1 or similar) of the object&#039;s owner&#039;s key and some human readable&lt;br /&gt;
name. By embedding a client key into the GUID, servers and other clients can&lt;br /&gt;
verify the identity and ownership of an object without querying a&lt;br /&gt;
third-party server.&lt;br /&gt;
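This derivation can be sketched in one line, assuming SHA-256 as the &amp;quot;SHA-1 or similar&amp;quot; hash and a simple concatenation as the encoding:&lt;br /&gt;

```python
import hashlib

def object_guid(owner_public_key: bytes, human_name: str) -> str:
    # GUID = secure hash of the owner's public key and a human-readable
    # name. Anyone holding the key and name can recompute and verify the
    # GUID without asking a third party. The exact encoding is illustrative.
    return hashlib.sha256(owner_public_key + human_name.encode()).hexdigest()
```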
&lt;br /&gt;
Freenet [2] also uses keypair-based naming but in a slightly&lt;br /&gt;
different way than OceanStore. Freenet identifies all files by a binary key&lt;br /&gt;
which is obtained by applying a hash function. There are three types of keys in&lt;br /&gt;
this distributed file system:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Keyword-signed key (KSK)&#039;&#039;&#039; This is the simplest identifier because it&lt;br /&gt;
is derived from an arbitrary text string chosen by the user who is storing the&lt;br /&gt;
file on the network. A user storing a PDF document might use the text string&lt;br /&gt;
&amp;quot;freenet/distributed/file/system&amp;quot; to describe the file. The string is used to&lt;br /&gt;
deterministically generate a private/public keypair. The public part of the key&lt;br /&gt;
is hashed and becomes the file identifier. &lt;br /&gt;
&lt;br /&gt;
We note that files can be recovered by guessing or brute-forcing the text&lt;br /&gt;
string. Also, nothing stops two different users from coming up with the same&lt;br /&gt;
descriptive string, and the second user&#039;s file would be rejected by the system,&lt;br /&gt;
as there would be a collision in the namespace.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Signed-subspace key (SSK)&#039;&#039;&#039; This method enables personal namespaces&lt;br /&gt;
for users. For this to work, users generate a public/private keypair using a&lt;br /&gt;
good random number generator. The user also creates a descriptive text string,&lt;br /&gt;
but in this case, it is XORed with the public key to generate the file key.&lt;br /&gt;
This method allows users to manage their own namespace (though collisions can&lt;br /&gt;
still occur locally if the user picks the same string for two files). Users can&lt;br /&gt;
also publish a list of keywords and a public key if they want to make those files&lt;br /&gt;
publicly available. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Content-hash key (CHK)&#039;&#039;&#039; In this method, the file key is derived by&lt;br /&gt;
hashing the contents of the file. Files are also encrypted with a random encryption&lt;br /&gt;
key specific to that file. For others to retrieve the file, the owner makes&lt;br /&gt;
available the file hash along with the decryption key.&lt;br /&gt;
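The two simplest key types above can be sketched in Python. This is only an illustration: real Freenet derives DSA keypairs and uses its own hash primitives, whereas here SHA-256 and a hash-based stand-in "keypair" are assumed.&lt;br /&gt;

```python
import hashlib

def ksk(description: str) -> str:
    """Keyword-signed key: deterministically derive a stand-in 'keypair'
    from the descriptive string, then hash the public half to get the
    file identifier. (Real Freenet uses DSA; this is illustrative.)"""
    seed = hashlib.sha256(description.encode()).digest()
    public_part = hashlib.sha256(b"pub" + seed).digest()  # stand-in public key
    return hashlib.sha256(public_part).hexdigest()

def chk(contents: bytes) -> str:
    """Content-hash key: the file key is simply a hash of the contents."""
    return hashlib.sha256(contents).hexdigest()
```

Because the KSK derivation is deterministic, any two users choosing the same descriptive string produce the same key, which is exactly the collision problem noted above.&lt;br /&gt;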
&lt;br /&gt;
==Hierarchical naming==&lt;br /&gt;
Cheriton et al. [1] suggest naming objects using a long&lt;br /&gt;
name which includes multiple pieces of information: (1) the resource&#039;s name&lt;br /&gt;
and location on the file server where it resides; (2) the organization where&lt;br /&gt;
that file server is located; and (3) a global administrative domain&lt;br /&gt;
representing all the organizations participating in the distributed file system.&lt;br /&gt;
For example, a file name of &amp;quot;[edu/stanford/server4/bin/listdir&amp;quot; is split&lt;br /&gt;
into: [edu (global domain), /stanford/server4 (organization domain), and /bin/listdir (directory and file).&lt;br /&gt;
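That split can be sketched as follows; parse_name is a hypothetical helper, and the fixed field positions are an assumption, since the paper's actual name grammar is richer.&lt;br /&gt;

```python
def parse_name(name: str):
    """Split a global name like "[edu/stanford/server4/bin/listdir" into
    (global domain, organization domain, server-local path). The leading
    "[" marks the global root."""
    if not name.startswith("["):
        raise ValueError("expected a global name starting with '['")
    parts = name[1:].split("/")
    global_domain = parts[0]                    # e.g. "edu"
    organization = "/" + "/".join(parts[1:3])   # e.g. "/stanford/server4"
    local_path = "/" + "/".join(parts[3:])      # e.g. "/bin/listdir"
    return global_domain, organization, local_path
```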
&lt;br /&gt;
This naming scheme gives clients all the necessary information (using only the&lt;br /&gt;
file name) to locate a file in a globally distributed file system. While this&lt;br /&gt;
may seem like a good solution, there are a few inherent limitations to the&lt;br /&gt;
proposal.&lt;br /&gt;
&lt;br /&gt;
First, file replication and load balancing can only be done at the lowest level&lt;br /&gt;
(i.e., in the file server selected by the organization hosting the file). This&lt;br /&gt;
can lead to a bottleneck when multiple files in the same organization become&lt;br /&gt;
&amp;quot;hot&amp;quot;. The authors suggest using caching and multicast to improve performance&lt;br /&gt;
and avoid congestion on inter-organization links. Second, it requires all&lt;br /&gt;
organizations participating in the system to agree or regulate the common&lt;br /&gt;
namespace, much like the current Domain Name System (DNS). For this to work&lt;br /&gt;
there must be an organization in which each stakeholder in the system is&lt;br /&gt;
equally represented. While such bodies do exist today (e.g., ICANN, the&lt;br /&gt;
Internet Corporation for Assigned Names and Numbers, a non-profit organization&lt;br /&gt;
that represents regional registrars, the Internet Engineering Task Force&lt;br /&gt;
(IETF), and Internet users and providers to help keep the Internet secure,&lt;br /&gt;
stable, and interoperable), they carry large amounts of&lt;br /&gt;
administrative overhead and therefore limit the speed at which changes to&lt;br /&gt;
deployed implementations can take place. &lt;br /&gt;
&lt;br /&gt;
One advantage of the approach of Cheriton et al. is that names and directory&lt;br /&gt;
structures must only be unique within an organization/server. The system as a&lt;br /&gt;
whole does not have to keep track of every organization-level implementation,&lt;br /&gt;
yet different organizations should still be able to exchange data.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
The Google File System (GFS) [3] takes a different approach to&lt;br /&gt;
naming files. GFS assumes that all the clients communicate with a single master&lt;br /&gt;
server, which keeps a table mapping full pathnames to metadata (file locks and&lt;br /&gt;
location). The namespace is therefore centrally managed, and all clients must&lt;br /&gt;
register file operations with the master before they can be performed. While&lt;br /&gt;
this architecture has an obvious central point of failure (which can be&lt;br /&gt;
addressed by replication), it has the advantage of not having to deal with a&lt;br /&gt;
distributed namespace. This central design also has the advantage of improving&lt;br /&gt;
data consistency across multi-level distribution nodes. It also allows data&lt;br /&gt;
to be moved to optimal nodes to increase performance or distribute load. It&#039;s&lt;br /&gt;
worth noting that lookup tables are a fundamentally different way to find&lt;br /&gt;
contents in a directory as compared to UNIX &#039;&#039;inodes&#039;&#039;  and related data&lt;br /&gt;
structures. This approach has inherent limitations such as not being able to&lt;br /&gt;
support symlinks.&lt;br /&gt;
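A minimal sketch of such a master-side lookup table, assuming a flat Python dict from full pathname to metadata (the names Master, create, and lookup are illustrative, not GFS's actual API):&lt;br /&gt;

```python
class Master:
    """Toy GFS-style master: a flat table from full pathname to metadata.
    No real locking or chunk servers; this only shows the lookup shape."""
    def __init__(self):
        self.table = {}  # full pathname -> {"locations": [...], "locked": bool}

    def create(self, path, locations):
        self.table[path] = {"locations": list(locations), "locked": False}

    def lookup(self, path):
        # A flat lookup: no inode walk and no per-directory structure,
        # which is why features like symlinks do not fall out naturally.
        return self.table.get(path)
```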
&lt;br /&gt;
Ceph [11] client nodes use near-POSIX file system interfaces which are&lt;br /&gt;
relayed back to a central metadata cluster. The metadata cluster is responsible&lt;br /&gt;
for managing the system-wide namespace, coordinating security and verifying&lt;br /&gt;
consistency. Ceph decouples data from metadata which enables the system to also&lt;br /&gt;
distribute metadata servers themselves. The metadata servers store pointers to&lt;br /&gt;
&amp;quot;object-storage clusters&amp;quot; which hold the actual data portion of the file. The&lt;br /&gt;
metadata servers also handle file read and write operations, which then&lt;br /&gt;
redirect clients to the appropriate object storage cluster or device. &lt;br /&gt;
&lt;br /&gt;
=Locating Resources=&lt;br /&gt;
&lt;br /&gt;
==Local File Systems==&lt;br /&gt;
In some distributed systems, files are copied locally and replicated to remote&lt;br /&gt;
servers in the background. NFS [9] is one example where clients&lt;br /&gt;
mount the remote file system locally. The remote directory structure is mapped&lt;br /&gt;
on to a local namespace which makes files transparently accessible to&lt;br /&gt;
clients. In this scheme, there is no need for distributing indexes or metadata,&lt;br /&gt;
since all files appear to be local. A client can find files on the&lt;br /&gt;
&amp;quot;distributed&amp;quot; file system in the same way local files are found.&lt;br /&gt;
&lt;br /&gt;
==Metadata Servers==&lt;br /&gt;
File systems that use lookup tables for storing the location and&lt;br /&gt;
metadata of files (e.g., [3,11]) can locate resources trivially by&lt;br /&gt;
querying the lookup table. The table usually contains a pointer to either the&lt;br /&gt;
file itself or a server hosting that file which can in turn handle the file&lt;br /&gt;
operation request. &lt;br /&gt;
&lt;br /&gt;
A very basic implementation of a metadata lookup is used in the Apollo Domain&lt;br /&gt;
File System [6]. A central name server maps client-readable strings&lt;br /&gt;
(e.g., &amp;quot;/home/dbarrera/file1&amp;quot;) to UIDs. The name server can be&lt;br /&gt;
distributed by replicating it at multiple locations, allowing clients to query&lt;br /&gt;
the nearest server instead of a central one. &lt;br /&gt;
&lt;br /&gt;
The Andrew file system [4] uses unique file identifiers to &lt;br /&gt;
populate a &#039;&#039;location database&#039;&#039;  on the central server which maps file&lt;br /&gt;
identifiers to locations. The server is therefore responsible for forwarding&lt;br /&gt;
file access requests to the correct client hosting that file.&lt;br /&gt;
&lt;br /&gt;
==Distributed Index Search==&lt;br /&gt;
Systems like Freenet [2] are designed to make it difficult for&lt;br /&gt;
unauthorized users to access restricted files. This is a hard problem,&lt;br /&gt;
since the system aims to be highly distributed, but at the same time provide&lt;br /&gt;
guarantees that files won&#039;t be read or modified by unauthorized third-parties.&lt;br /&gt;
However, Freenet has developed an interesting approach to locating files: when&lt;br /&gt;
a file is requested from the network, a user must first obtain or calculate the&lt;br /&gt;
file key. The user&#039;s node requests that file&lt;br /&gt;
from neighboring nodes, who in turn check if the file is stored locally, and if&lt;br /&gt;
not forward the request to the next nearest neighbor. If a node cannot forward&lt;br /&gt;
a request any longer (because a loop would be created or all nodes have&lt;br /&gt;
already been queried), then a failure message is transmitted back to the&lt;br /&gt;
previous node. If a file is found at some point along the request path,&lt;br /&gt;
then the file is sent back through all the intermediate nodes until it reaches&lt;br /&gt;
the request originator, which allows these intermediate nodes to keep a copy of&lt;br /&gt;
the file as a cache. The next time that file key is requested, a node which is&lt;br /&gt;
closer might have it, which will increase the retrieval speed. Nodes&lt;br /&gt;
&amp;quot;forget&amp;quot; about cached copies of files in a least recently used (LRU) manner,&lt;br /&gt;
allowing the network to automatically  balance load and use available space&lt;br /&gt;
optimally. &lt;br /&gt;
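The request-forwarding and caching behaviour described above can be sketched as follows, assuming a simple recursive search with a visited set standing in for Freenet's hops-to-live mechanism:&lt;br /&gt;

```python
from collections import OrderedDict

class Node:
    """Toy Freenet-style node: a name, an ordered neighbor list
    ("nearest first" under some stand-in metric), and an LRU store."""
    def __init__(self, name, cache_size=2):
        self.name = name
        self.neighbors = []
        self.store = OrderedDict()   # key -> data, least recently used first
        self.cache_size = cache_size

    def request(self, key, visited=None):
        visited = visited if visited is not None else set()
        visited.add(self.name)
        if key in self.store:
            self.store.move_to_end(key)        # refresh LRU position
            return self.store[key]
        for n in self.neighbors:
            if n.name in visited:
                continue                        # would create a loop: skip
            data = n.request(key, visited)
            if data is not None:
                self._cache(key, data)          # intermediate nodes keep a copy
                return data
        return None                             # "failure" propagates back

    def _cache(self, key, data):
        self.store[key] = data
        if len(self.store) > self.cache_size:
            self.store.popitem(last=False)      # forget in LRU order
```

After one successful request, every node on the return path holds a cached copy, so the next request for the same key terminates closer to its originator.&lt;br /&gt;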
&lt;br /&gt;
Distributing a file index was also proposed by Plaxton et al. [8].&lt;br /&gt;
Their proposal, however, has all nodes in the network maintain a&lt;br /&gt;
&#039;&#039;virtual tree&#039;&#039;. The tree information is distributed such that each node&lt;br /&gt;
knows about copies of files residing on itself and all nodes that form the&lt;br /&gt;
subtree rooted at that node. All nodes continually exchange neighbor&lt;br /&gt;
information, so new nodes gradually obtain enough tree information to&lt;br /&gt;
become the roots of their own subtrees. This method has the advantage of&lt;br /&gt;
distributing load and providing a hierarchical search functionality that can&lt;br /&gt;
use well known algorithms (BFS, DFS) to find resources on a network.&lt;br /&gt;
&lt;br /&gt;
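A minimal sketch of hierarchical search over such a virtual tree is shown below.&lt;br /&gt;
The structure and names are hypothetical: the real Plaxton scheme distributes&lt;br /&gt;
the index across nodes rather than computing it on demand as done here.&lt;br /&gt;

```python
# Sketch of hierarchical lookup over a virtual tree: each node's view
# covers the files on itself and on the subtree rooted at that node.
# Illustrative only, not the actual Plaxton data placement scheme.
from collections import deque

class TreeNode:
    def __init__(self, name, files=()):
        self.name = name
        self.files = set(files)   # files stored locally at this node
        self.children = []

    def subtree_index(self):
        # the index a subtree root is assumed to know about
        index = set(self.files)
        for child in self.children:
            index |= child.subtree_index()
        return index

def locate(root, filename):
    # breadth-first search, descending only into subtrees
    # whose index reports the file
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if filename in node.files:
            return node.name
        for child in node.children:
            if filename in child.subtree_index():
                queue.append(child)
    return None
```
&lt;br /&gt;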
==Pseudo-random Data Distribution==&lt;br /&gt;
Ceph [11] distributes data through a method that maximizes bandwidth and&lt;br /&gt;
efficiently uses storage resources. Ceph also avoids data imbalance (e.g.,&lt;br /&gt;
new devices being under-used) and load asymmetries (e.g., frequently requested&lt;br /&gt;
data placed only on new devices) with a globally known algorithm called CRUSH&lt;br /&gt;
(Controlled Replication Under Scalable Hashing). By using a predefined number&lt;br /&gt;
of &#039;&#039;placement groups&#039;&#039; (the smallest unit of object grouping for storage), the&lt;br /&gt;
CRUSH algorithm stores and replicates data across the network in a&lt;br /&gt;
pseudo-random way. This algorithm tells the metadata servers both where the&lt;br /&gt;
data should be stored and where it can be found later, which helps clients and&lt;br /&gt;
metadata servers in locating resources. &lt;br /&gt;
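The deterministic placement idea can be illustrated with a toy stand-in for&lt;br /&gt;
CRUSH. The hashing scheme and names below are invented for illustration; the&lt;br /&gt;
real CRUSH algorithm descends a weighted hierarchy of storage devices.&lt;br /&gt;

```python
# Toy stand-in for CRUSH-style placement: objects hash to a fixed number
# of placement groups, and each group maps pseudo-randomly but
# deterministically to a set of storage devices.
import hashlib

NUM_PLACEMENT_GROUPS = 64

def placement_group(object_id):
    digest = hashlib.sha256(object_id.encode()).hexdigest()
    return int(digest, 16) % NUM_PLACEMENT_GROUPS

def devices_for_group(group, devices, replicas=2):
    # rank devices by a per-group hash, so any party that knows the
    # device list can compute the same placement with no lookup table
    ranked = sorted(devices,
                    key=lambda d: hashlib.sha256(f"{group}:{d}".encode()).hexdigest())
    return ranked[:replicas]

def locate(object_id, devices, replicas=2):
    return devices_for_group(placement_group(object_id), devices, replicas)
```

Because the mapping is a pure function of the object name and the device list,&lt;br /&gt;
clients and metadata servers can compute identical placements independently,&lt;br /&gt;
which is what lets CRUSH answer both where data should be stored and where it&lt;br /&gt;
can be found later.&lt;br /&gt;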
&lt;br /&gt;
=Conclusions=&lt;br /&gt;
This paper has presented a brief survey of distributed file system research&lt;br /&gt;
conducted over the past 20 years. A wide range of distributed file systems have&lt;br /&gt;
been designed to have varying levels of scalability, usability and efficiency.&lt;br /&gt;
Depending on the requirements of a distributed file system, different approaches&lt;br /&gt;
may be taken to address two main concerns: file naming and file retrieval.&lt;br /&gt;
Unfortunately, there is no clear winner in either category, which&lt;br /&gt;
means that selecting the &amp;quot;right&amp;quot; method for a given file system will always&lt;br /&gt;
depend on the requirements and users of that system.&lt;/div&gt;</summary>
		<author><name>Dbarrera</name></author>
	</entry>
</feed>