Difference between revisions of "DistOS-2011W Justice"

From Soma-notes
* David Barrera


Note: research so far moved to Discussion section.
[https://docs.google.com/present/edit?id=0AQJ2IGOeo68XZGhuNnJ0YjRfM2doZDg3Ymc5&hl=en&authkey=CK7Mk4YO Presentation]


=Abstract=
The goal of this article is to investigate the feasibility of implementing a system of justice in a distributed computing environment. Although directly applying human concepts related to justice, for example intent, is not possible in the realm of computers, we can use these concepts to construct a justice system that helps maintain the stability and efficiency of the distributed environment. To provide this functionality, the justice system requires a reporting mechanism as well as a mechanism for guaranteed attribution of transactions, such that members of the distributed society can flag deviant behaviour and proper punishment may be exacted based on collected evidence.


This article is divided into two main sections: the first is a discussion of theories of human justice and how concepts from the human example may be used within the scope of computers. The second section describes the components necessary to create a justice system for a distributed computing society and how these components would be used in three malicious acts: comment spam, denial of service, and phishing attacks.
<br/>
<br/>
===Overview===


The implementation described in this section takes the above discussion of justice and applies it to the management of a computer network. The implementation is designed to be incrementally deployable, so that it would be realistic for a network to adopt the proposed system. We refer to this implementation as the "Justice Web".


The purpose of the Justice Web is to protect public-facing services from attacks coming from outside the network. This is accomplished by keeping a record of the criminal acts committed by connecting hosts, and allowing services access to these records. Criminal acts in this case are actions taken by a connecting host that are considered harmful to the network. The record kept by the network is a "Morality Rating", an integer meant to reflect the severity of the crimes committed.


===Assumptions===


Certain assumptions must be made regarding the other class projects in order for this implementation to be deployable. Most importantly, it is assumed that there is some way in which the network can uniquely identify a computer that connects to it, regardless of where it resides (inside or outside the Justice Web). This allows the Justice Web to keep a criminal log of clients and to recognize when a known offender is attempting to connect.


===Morality Rating===
A Morality Rating (MR) is an integer assigned to computers that have connected to a service within the Justice Web. The purpose of the MR is to keep track of a computer's past offenses and to allow services to restrict access using thresholds. For instance, a service within the Justice Web could restrict access to hosts with an MR above -100. The [http://homeostasis.scs.carleton.ca/wiki/index.php/DistOS-2011W_Observability_%26_Contracts Observability and Contracts] team suggests abstracting the idea to simply giving or removing points from hosts that honor a contract.
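As an illustrative sketch of the threshold idea (the class, function, and host names below are hypothetical; the article does not prescribe an API):

```python
# Illustrative sketch of a Morality Rating (MR) store with a
# per-service access threshold. All names are invented for
# illustration; this is not a prescribed implementation.

class MoralityRegistry:
    """Tracks MR values for known hosts; unknown hosts start at 0."""

    def __init__(self, starting_mr=0):
        self.starting_mr = starting_mr
        self.ratings = {}

    def get(self, host_id):
        return self.ratings.get(host_id, self.starting_mr)

    def adjust(self, host_id, delta):
        self.ratings[host_id] = self.get(host_id) + delta


def allow_connection(registry, host_id, threshold=-100):
    """A service admits a host only if its MR is above the threshold."""
    return registry.get(host_id) > threshold


registry = MoralityRegistry()
registry.adjust("attacker.example", -150)   # past offenses lowered its MR

print(allow_connection(registry, "attacker.example"))  # False
print(allow_connection(registry, "honest.example"))    # True
```

A service would tune `threshold` to match its own tolerance for past offenses, as described above.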


While the primary purpose of the Justice Web is to protect against attackers from outside the network, every node in the Justice Web is also assigned an MR, which increases and decreases based on its actions within the network. Ideally, those with a higher MR are allowed access to more shared resources, though this is implementation-specific.


The MR assigned to a computer is local to the Justice Web that assigned the rating. For example, if two separate networks deploy a Justice Web, the ratings they assign do not affect the other network's ratings. This is similar to real-world justice, where an action may be considered criminal in one country but not in another.


===Judges===
 
In order to assign MR to offenders, an authority figure is needed to declare if a crime has been committed. In the Justice Web, this role is taken by the Judges, who may be one or more computers within the network. It is the Judges’ responsibility to create the rules of the network, gather the evidence when a claim is made, declare if a crime has been committed, and assign a new MR based on the ruling.
 
How a Judge is picked isn’t set in stone, but in general it would be the node(s) in the network with the highest MR. Alternatively, the Judges could be picked through some democratic process.


Judgments are mostly automated, based on the rules of the network. However, the rules can specify that certain crimes, such as a claim that a phishing scam has been committed, be dealt with by a human.


===Master List===


The Justice Web is a virtual network, in that the nodes are not necessarily connected or even anywhere near each other. Because of this, it would be inconvenient and potentially harmful to have services look up a computer's MR on every connection attempt. Instead, MRs are stored in a central location but propagated throughout the network.


This is done using a master-slave approach to database replication. The Judges of the network store the "Master List" and propagate the data to the "Slave Lists" stored by the services within the network. The records stored in a Slave List are determined by the thresholds that the specific service has put in place. As mentioned in the Morality Rating subsection, a service can set thresholds to determine if a computer should be allowed access. In that example, an MR of -100 would be blocked from the service; if a service had only this threshold in place, it would only need to be aware of computers with an MR of -100, and so would only store that data in its Slave List. Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].
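The filtering step above can be sketched as follows (the data and the cutoff policy are invented for illustration; a real Slave List would be pushed incrementally by the Judges rather than rebuilt from scratch):

```python
# Sketch of deriving a service's Slave List from the Judges' Master
# List: the service mirrors only the records its own thresholds make
# relevant. Hosts and values are hypothetical.

master_list = {
    "a.example": -100,
    "b.example": -40,
    "c.example": 25,
}

def build_slave_list(master, block_at=-100):
    """Mirror only hosts the service would actually block."""
    return {host: mr for host, mr in master.items() if mr <= block_at}

slave_list = build_slave_list(master_list)
print(slave_list)  # {'a.example': -100}
```

A service with a stricter threshold (say, blocking at -40) would simply mirror a larger subset of the Master List.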


===Rules===


Judges define and use rules to determine whether a crime has been committed. A rule consists of three parts: the offense, the proof needed, and the severity of the punishment. The offense is a name assigned to the crime, which services can claim has been committed. The proof is the information required for the judges to be able to make a conviction. The severity of the punishment is an integer value to subtract from the offender's current MR.
 
Each network deploying a Justice Web specifies its own set of rules. These rules are made available to the public so that services within the network are aware of the crimes they can report. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., [http://laws-lois.justice.gc.ca/eng/acts/C-46/ the criminal code of Canada]).
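As a rough sketch of how the three-part rules and the (mostly automated) judging described above could fit together — the concrete offenses, penalty values, evidence fields, and the human-review flag are all invented for illustration:

```python
# Hypothetical sketch of rules and automated judgment. Each rule has
# the three parts described above: the offense, the proof needed, and
# the severity of the punishment. A rule may also require a human
# judge, as suggested for phishing claims.

RULES = {
    # offense: (required evidence fields, MR penalty, needs human review)
    "comment_spam": ({"comment", "site", "offender_id"}, 50, False),
    "phishing":     ({"fraud_url", "legit_url"},        200, True),
}

def judge_claim(offense, evidence, ratings):
    """Check the rule, verify the evidence fields are present, then
    convict (lowering the offender's MR), defer to a human, or dismiss."""
    if offense not in RULES:
        return "dismissed: unknown offense"
    required, penalty, needs_human = RULES[offense]
    if not required <= evidence.keys():
        return "dismissed: insufficient evidence"
    if needs_human:
        return "deferred to human judge"
    offender = evidence["offender_id"]
    ratings[offender] = ratings.get(offender, 0) - penalty
    return "convicted"

ratings = {}
verdict = judge_claim("comment_spam",
                      {"comment": "buy now!", "site": "blog.example",
                       "offender_id": "spammer.example"}, ratings)
print(verdict, ratings)  # convicted {'spammer.example': -50}
```

Note how the "proof needed" part of a rule becomes a simple presence check here; real evidence validation would be far more involved.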
 
===Evidence===


Evidence is used by the Justice Web to determine if a crime has been committed. Evidence is stored in encrypted logs located on a service's computer, and submitted to the judges when a claim is made.


Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application-layer activity (e.g., web server logs). These logs must be digitally signed or encrypted to ensure that the computer making the claim, or any other system in the chain of custody, does not tamper with the evidence. When evidence is received by the judges, the logs are decrypted and reviewed.

The type of evidence required varies, and is defined by the Judges of a network. For a DDoS attack, for example, the Justice Web would potentially be able to examine the evidence logs and determine, through analysis of statistical evidence, which computers were actively involved in the attack and which traffic was legitimate [17].
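A minimal sketch of the tamper-evidence requirement, using an HMAC as a stand-in for the signing/encryption described above (the key handling is deliberately simplified; in practice the log would need to be sealed by a component the claimant cannot control, such as a judge-issued trusted logger):

```python
# Sketch of a tamper-evident evidence log using an HMAC. The shared
# secret and log format are hypothetical; a real deployment would use
# keys the claimant cannot access.

import hashlib
import hmac
import json

JUDGE_KEY = b"hypothetical-shared-secret"

def seal_log(entries, key=JUDGE_KEY):
    """Serialize the log entries and compute an authentication tag."""
    payload = json.dumps(entries, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_log(payload, tag, key=JUDGE_KEY):
    """Recompute the tag; any modification to the payload is detected."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = seal_log([{"src": "10.0.0.9", "action": "POST /comment"}])
print(verify_log(payload, tag))                # True
print(verify_log(payload + b"tampered", tag))  # False
```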


===Membership===


Members of a Justice Web would primarily be public-facing services seeking protection from attacks. However, because shared resources can be granted based on a node's MR, there is also reason for computers to join the network simply to access those resources. Note that a computer joining a Justice Web retains the MR it earned outside the network; a host cannot reset its score simply by joining.

==Global Implementation==
===Overview===
Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term ''Internet''). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:
 
*'''Where should the master morality list be stored?''' - Distributed storage at a global level is possible, but is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host).  


*'''How are judges elected?''' - Self-governing entities often have a common set of laws, but these laws are not necessarily shared with other self-governing entities. In the real world, cross-jurisdiction legal systems are known to exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations where countries participate in so-called "global councils". Generally, each participating member country appoints one or more people to represent the country's interests in the council.


Due to these restrictions, we do not believe an incrementally deployable implementation such as the Justice Web, where hosts opt in, is possible at a global scale. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of losing incremental deployability.


=== Morality Rating ===


The global implementation still requires the existence of a morality rating, but in a global setting, we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a "master list" or "slave list" of morality ratings. The obvious requirement for a built-in morality rating is that the host itself must not be able to arbitrarily modify the value. One possible mechanism is a [http://www.trustedcomputinggroup.org/developers/ Trusted Platform Module (TPM)], which allows encryption and decryption of data but does not allow extraction of the private encryption key. Storing the morality rating within hosts rather than on external lists alleviates the need for distributed storage and allows better scalability, but also requires all hosts to be compliant with the mechanism.
 
=== Connection management ===
 
Due to the modified morality rating storage, there is no longer a need to look up the morality rating of a host upon incoming connections. We therefore need a way to transmit the morality rating on each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean modifying underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web.
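The connection-time decision described above can be sketched with a toy header format (the field names and syntax are invented; a real design would add the field to an actual network protocol, and the value would be protected by the tamper-resistance mechanism discussed above):

```python
# Sketch of the global scheme's connection filtering: the client's
# MR travels with the request, so the server can decide without any
# list lookup. The 'mr=<int>;host=<id>' header format is invented.

def parse_header(raw):
    """Parse a toy protocol header of the form 'mr=<int>;host=<id>'."""
    fields = dict(part.split("=", 1) for part in raw.split(";"))
    return {"mr": int(fields["mr"]), "host": fields["host"]}

def accept(raw_header, threshold=-100):
    """Admit the connection only if the transmitted MR clears the bar."""
    return parse_header(raw_header)["mr"] > threshold

print(accept("mr=-150;host=attacker.example"))  # False
print(accept("mr=12;host=honest.example"))      # True
```

Because the decision needs only the incoming header, this check could run at the firewall level, as the comment spam use case below suggests.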
 
=== Rules and Judges ===
 
Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research.
 
In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but would require a full reboot of the Internet as well as new hardware, making it an unlikely solution in practice.
 
==Use Cases==
 
This section reviews three common attacks and describes how the computer-based justice system would deal with them.
 
===Case 1: Comment Spam===
The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted.
 
'''Evidence collected.''' The comment being reported as spam, as well as the website hosting the comment (forum, blog, etc.). The ID of the commenter is also collected, assuming we have a unique identifier for each commenting host.


==== Solution ====
'''Local implementation'''


*Users report comment spam.
*The morality rating of the offending host is adjusted if the evidence is found to incriminate the host.
*Based on the new morality rating, the offending host may no longer be allowed to post to the site, depending on the restrictions of the hosting server.


'''Global implementation'''


*Same method for reporting comment spam and adjusting the morality rating as in the local implementation above.
*If a host has a sufficiently low morality rating, the host site will prevent the offending host from communicating with the site at all.


===Case 2: Denial of Service===
Denial of service is the act in which a service that is normally available is accessed by a large number of hosts, or by a small number of hosts at high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].


'''Evidence collected.''' ID of any host connecting to the victim server for the duration of the attack.


==== Solution ====
'''Local implementation'''


*The morality rating of each connecting host is looked up to see if the request should be handled; this may cause even greater load on the victim.
*Once it has been established that participants in the attack have an unacceptable morality rating, they are blocked from communicating with the site.


'''Global implementation'''


*Since the morality rating is passed in with each communication, requests could be filtered out (e.g., at the firewall level).
*Any incoming communication with a sufficiently bad morality rating would simply be ignored.


===Case 3: Phishing===
Phishing provides an interesting challenge for a justice system, as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.


'''Evidence collected.''' The fraudulent site URL and the legitimate site URL.


==== Solution ====


'''Local implementation'''


*Users report the phishing site.
*Based on the morality rating of the host of the phishing site, it may be removed from the network.


'''Global implementation'''


*Same method of reporting and morality adjustment as in the local implementation above.
*Removal from the network is not really possible, but the client can read the server's morality rating upon connecting.


=Conclusion=
Applying justice to a distributed system requires an understanding of how society applies teleologic and retributive methods of punishment, as well as the range of intent between purposely and negligently participating in a deviant act. Discussions of punishment and intent raise another social construct that exists in society: morality. Looking at a single computer, it is hard to say that the computer "intended" to do something, or that it would feel remorse if we made it perform repetitive operations as a form of punishment. Although implementing emotions and a sense of self-preservation in a computer is difficult, we can at least assign a morality value to each node, so that it may be judged by any individual that plans on communicating or interacting with that node. By discussing specific cases in which a justice system would take part in a distributed system, we can conceptualize a basis upon which a future implementation of justice on computers might be possible. Given the advantages and disadvantages of implementing such a system at local and global scales, a more in-depth look is needed at the technical aspects, and at the assumptions that must be upheld by other facets of the Internet (attribution, reputation, contracts), in order to fight injustice and to turn fear against those who prey on the fearful: this is what the Justice Web is for.


=Resources=


[14] R. M. Needham, ''Denial of Service'', ACM, New York, USA, 1993, [http://portal.acm.org/citation.cfm?id=168607 PDF]
[15] C. A. Thekkath, T. Mann, and E. K. Lee, ''Frangipani: A scalable distributed file system'', in ACM SIGOPS Operating Systems Review, 1997.
[16] S. Ghemawat, H. Gobioff, and S.-T. Leung, ''The Google File System'', in ACM SIGOPS Operating Systems Review, 2003.
[17] S. Yu, W. Zhou, R. Doss, ''Information theory based detection against network behavior mimicking DDoS attacks'', IEEE, April 2008, [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&tag=1]. Last visited April 2011.

Latest revision as of 20:50, 11 April 2011

Members

  • Matthew Chou
  • Mike Preston
  • Thomas McMahon
  • David Barrera



Can Justice be Implemented on a Distributed Computing System: Discussion

In this section we present definitions of human justice and punishment. It is important to understand these concepts so that we can use them as a template to create a system of justice for a distributed computing environment. This section also includes definitions of key concepts related to justice and how they relate to potential justice in the realm of computers.

What is Justice?

Theory of Justice

John Rawls defines the purpose of justice as providing two primary functions: first, justice assigns rights and duties for the basic institutions of society, and second, justice describes the best way to distribute the benefits and burdens of society. [1] Essentially, justice must ensure that a society is able to operate efficiently and with sufficient stability. This view fits well within the scope of justice for a distributed computing system, as there are very clear roles which can be assigned and there are finite resources which can be used to manage the "benefits and burdens" of society.

In order for a society to uphold justice, it must possess the ability to punish those who behave in a deviant manner. From a philosophical point of view, there are three main categories of punishment: Teleologic, Retributive, and Teleologic Retributive. [3]

Teleologic View of Punishment:

The teleologic view of punishment is that any punishment should always be accompanied by some beneficial effect. Even though the act of punishing someone may itself be considered “evil”, the overall punishment will be considered “good” if it provides some form of social benefit to the society. For example, if a criminal is punished for a crime and this punishment serves to dissuade potential criminals from committing future crimes, then the overall social value of the punishment is positive.[3]

Punishment from this perspective provides a good model for computer systems, as any criminal act will be handled such that the punishment is beneficial to the system. A simple example of such a transaction can be visualized through the management of bandwidth. If a particular computer is deemed to be a criminal bandwidth hog, using more resources than it is allowed, the perpetrator's network connection may be throttled. This punishment would correct the deviant computer while freeing resources for other computers in the system to utilize.
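The bandwidth example above can be made concrete with a toy sketch (the allocation numbers, cap, and redistribution policy are invented purely for illustration of the teleologic idea: the punishment itself produces a benefit for the rest of the society):

```python
# Toy illustration of the teleologic punishment example: a criminal
# bandwidth hog is throttled to a cap, and the reclaimed bandwidth is
# shared evenly among the remaining hosts. All values are invented.

def throttle_hog(allocations, hog, cap):
    """Cap the offender's bandwidth and split the excess evenly."""
    excess = allocations[hog] - cap
    if excess <= 0:
        return allocations  # no punishment needed
    others = [h for h in allocations if h != hog]
    share = excess / len(others)
    new_alloc = {h: bw + share for h, bw in allocations.items() if h != hog}
    new_alloc[hog] = cap
    return new_alloc

alloc = {"a": 15.0, "b": 15.0, "hog": 70.0}
print(throttle_hog(alloc, "hog", 30.0))  # {'a': 35.0, 'b': 35.0, 'hog': 30.0}
```

The punishment corrects the deviant host while the rest of the system measurably benefits, which is exactly what distinguishes the teleologic view from the purely retributive one discussed next.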

Retributive View of Punishment:

Retributive punishment is defined by the belief that punishment itself is just or intrinsically valuable even if there is no social benefit to the punishment. This view is probably best characterised by the phrase "an eye for an eye". Essentially, punishment is dispensed because it is necessary to inflict harm on those who do bad things, even though society may not get any benefit from the punishment.[3] From the retributive point of view, it is better to punish someone who commits a crime, regardless of the severity of the punishment.[2]

It is also important to contrast retributive punishment with retaliation. Although both incorporate the concept of punishment as a just and necessary act, they have very different goals. Retribution focuses on the wrongdoing of the criminal, whereas retaliation is based on the right of the victim to seek punishment. Retaliation is based on the concept of deterrence: if you are convicted of a crime, then someone will get to exact revenge, and thus you will pay a price. Retribution requires a criminal to pay a price for the crime committed, and thus to internalize how the crime has a negative effect on society. [1]

Although punishment is certainly necessary, it is hard to see a situation where truly retributive punishment benefits a computer system. Since computers in a distributed system share finite resources, inflicting punishment on criminal computers without considering the benefit or harm to the society may further damage the system. If a crime has been committed, its effects on the computing system have already been felt. Punishing the perpetrator will not reverse those effects, and it may adversely affect the system.

For example, suppose a computer is caught conducting spam attacks on other computers and the punishment for this act is to remove the computer from the network. It may be the case that the criminal computer had previously provided a very efficient connection to some data set but now there is no way to communicate with this computer. As a result, members of the remaining network must use a less efficient connection to reach the same data, thus the punishment had a negative effect on the system.

Teleologic Retributive View of Punishment:

This third view of punishment combines the need to punish with limits on what is considered reasonable punishment for the crime. From this perspective, punishment is necessary and provides a valuable service to society, but it is only enforced within acceptable limits.

To illustrate this view of punishment, consider the spam attack example from the retributive section above. If the punishment is reduced from removal from the network to simply blocking a specific type of communication originating from the criminal computer, then it would be considered a teleologic retributive punishment. This new punishment would match the severity of the crime while still allowing the other computers on the network to utilize the efficient network path through the criminal computer.


Structure of Punishment

To maintain a stable and efficient distributed system, punishment requires structure, or more accurately, there needs to be some power imbalance designed into the system such that some computers can hand out punishments to other, criminal, computers. Here we briefly discuss a few methods which may be used to implement a penal system in a society.

Sovereign Rule:

In the 1600s, Thomas Hobbes wrote a treatise on how government and society should be structured. Within this work, Hobbes discusses how punishment should be handled by a sovereign ruler. In this system, there is a known set of laws which originate from a single entity that exists above the law. This sovereign ruler is the highest authority of the law, but he may appoint lesser judges who carry out punishment in accordance with the laws.[4]

In this system, breaking the law is never excusable, as the law is known to all members of the society. The exception to this is any member of society who is without reason, for example "children and madmen". Punishment is a necessary evil, and the sovereign has the right to punish any criminal in order to protect the "commonwealth". The sovereign can even order other subjects to punish criminals, but he may not order a criminal to punish himself, as this violates the law of self-preservation. To balance the system, the sovereign may also reward individuals; thus the balance of punishment and reward are the "nerves and joints which move the limbs of a commonwealth."[4]

Essentially, sovereign rule is one overall leader of justice who determines what is right and wrong in order to best serve the needs of a system.

Corporal Punishment, Economic Punishment, and Prison:

Human punishment commonly falls into three overlapping categories: corporal punishment, economic punishment, and prison. Corporal punishment involves inflicting pain on, or possibly disfiguring, a criminal in response to the crime committed. The main idea is that the criminal should serve as a demonstration of the terrible things that befall those who break the law. Furthermore, any criminal who is disfigured must live with a visual reminder of the act they committed, thus imparting shame upon the perpetrator and allowing others in society to form a conceptual model of the type of person that individual is.[5]

Economic punishment is forcing a criminal to pay a fine for the act committed. The main idea is to make criminals internalize the social costs of the crime they committed. The penalty fine imposed upon the criminal may not be equal to the social cost of the crime, but it should cause the criminal the same amount of distress as the crime caused.[1] Prison is a modern method of punishment by which criminals are forced to exist under the watch of professionals, and it is up to the discretion of those professionals when the punishment is complete. For example, it is up to lawyers, judges, psychologists, and prison guards to determine when a criminal's prison sentence has ended.[5]

These three methods are not mutually exclusive, as criminals may commonly be asked to pay a penalty fine as well as serve a prison sentence; however, they serve different purposes. All three punishment types act as deterrents to future criminals, but each method has a different active agent: corporal punishment uses shame, economic punishment uses monetary handicapping, and prison focuses on reducing personal freedoms. These three concepts may be very useful to a distributed computer justice system.


Additional Concepts Related to Justice

Morality:

For a system of justice to be effective, a known moral code must exist within the society. Friedrich Nietzsche provides one interpretation of morality based on social position, divided into two categories: "master-morality" and "slave-morality". Master-morality is framed as good vs. bad: good covers things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic. Slave-morality, on the other hand, is based on good vs. evil: good covers terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.[6]

Although some of these terms make no sense in the realm of computers, others certainly could serve as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, and data integrity, then certain computers could be more "good" than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam attacks), then those computers would be considered morally "bad". Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that, if you do not care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur.
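One way to make this concrete is a toy scoring function over such measurable qualities. The metric names and weights below are pure assumptions chosen for illustration; the article does not prescribe any particular formula.

```python
# Illustrative only: a toy "master-morality" score built from measurable
# network qualities. Weights and metric names are assumptions.

def morality_score(metrics):
    """Higher bandwidth and data integrity raise the score ("strength",
    "health"); high latency and observed attacks lower it ("weak", "cruel")."""
    score = 0.0
    score += metrics.get("bandwidth_mbps", 0) * 0.1
    score += metrics.get("integrity", 0) * 10       # fraction of valid data, 0..1
    score -= metrics.get("latency_ms", 0) * 0.05
    score -= metrics.get("attacks_reported", 0) * 25
    return score

good = morality_score({"bandwidth_mbps": 100, "integrity": 1.0, "latency_ms": 20})
bad = morality_score({"bandwidth_mbps": 100, "integrity": 1.0,
                      "latency_ms": 20, "attacks_reported": 2})
assert good > bad   # attacks make an otherwise identical node morally "bad"
```

A node deciding whether to form a relationship could then simply compare such scores against a tolerance parameter of its own choosing.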

If morality were introduced to a distributed computer system that already has a reliable reputation mechanism, then all computers would be able to know how other computers behave "socially". This would further allow punishment methods based on shame to be exacted based on how "bad" a computer's moral code is. An offending computer would then have to rebuild a positive moral reputation before it could participate in more trusted social interactions.


Intent:

Mens Rea - the state of mind:

It is said that a crime consists of two elements: the actus reus and the mens rea. The actus reus is the action of the crime, and the mens rea is the mental state. The mental state of a person is highly relevant to the punishment of crimes, and the Model Penal Code ("MPC") categorizes the mens rea into four levels: purposely, knowingly, recklessly, and negligently. These range from the highest, committing a crime on purpose, to the lowest, participating in a crime negligently. An example of such a distinction is whether a car hitting someone was done intentionally or by accident.[7]

For a computer, there is no such thing as "intent", there is only computation. As such, discovering the intent of a computer is a meaningless task. To handle this issue a system of justice for computers can take one of two approaches; it can attempt to discover the intent of the user of the computer before distribution of punishment, or it can punish all perpetrators of an act with the same severity regardless of intent. If a computer justice system attempts to discover the intent of the human operating a computer then this system must involve a human who can decipher human reason. This would create a justice system that would have a bottleneck at the human investigation point. For our purposes, we propose that all perpetrators of a deviant act are punished with the same severity to prevent the need for human interventions/investigations.

Computer Fraud and Abuse Act:

The digital age has brought on many new kinds of crime involving computers and the Internet. An example of a preventative measure for these crimes is the Computer Fraud and Abuse Act ("CFAA"), created by the United States Congress in 1984. This criminal statute was built on the ideas of mens rea and the MPC. Since it was first implemented, it has required many amendments because of how imprecisely different crimes were categorized according to the mens rea. The distinction between "knowingly" and "intentionally" committing an act changes the degree of punishment, and the difference between accessing a system and damaging a system also had to be specified more precisely over time.[8]

One well-known case is that of Robert Tappan Morris, a first-year graduate student at Cornell University who attempted to demonstrate the inadequacies of the security measures on the computer networks of the day (the Internet) by releasing a worm. The worm propagated faster than he had intended; he attempted to release instructions on how to kill it, but it was too late, and many computers across the Internet were affected. The government had to prove that it was his intent to access unauthorized computers, which it did; it also tried to prove that it was his intent to damage the machines, but at that point damaging machines had no category in the mens rea.[9]

Justice Involving Computers

Applying justice to computers

The first issue arises from the discussion of the mens rea. Some might say that a computer executes commands that are input by the user, so everything the computer does must be on purpose, because it is just following instructions. This may be true, except in the case where an error has occurred in the system, or a bit has gone missing, and sensitive information has been sent to an incorrect address. If this error caused great losses to some entity, would the user be blamed for it, or would the computer be blamed for negligently sending to the wrong address? This situation is similar to how humans may be charged for killing someone: the difference between murder and manslaughter is the intent.[12] With the current structure of computers, it is difficult to map a mens rea scheme onto their inner workings. (Perhaps if a computer were running some genetic programming to create a program which it deemed good, and then intentionally used it, that intent to use would differ from it continually creating new programs until it decides one is suitable.)

Assuming that the state of mind of a computer can be decided, the next thing to consider is how one would prevent a computer from doing malicious actions. Following in the footsteps of general deterrence theory, we would try to instill some fear of the consequences, or shame, that would come from committing malicious actions. The problem with this approach is that computers do not have feelings: any kind of work, from word processing to a denial of service attack, is equal in terms of what the computer prefers to do. Deterring potential criminals only works if they fear the consequences and cannot accept the ratio of profit to penalty that a malicious act would procure. If the punishment for a computer were to execute many functions for a long period of time, the computer itself would not care whether it was doing those functions or standing by idle. However, the human who forces his computer to take such malicious actions may be deterred by the consequences that might follow from the law, or by the performance drop on his own computer. The penalties currently in place only affect a human; whether the sentence is jail or confiscation of the physical computer, the human aspect of the problem is removed from the computer element. If the computer itself were the only one punished for such malicious actions, nothing would prevent further malicious actions from occurring on the network by the same human user at another computer terminal.[13]

Since a distributed system would want to one day grow to a global scale, laws and punishments cannot be enforced in a legal sense because of jurisdiction issues; therefore, a new system must be implemented so that computers on the system are deterred from malicious actions. A morality system in which every node has a personal morality rating would allow nodes to communicate with other nodes based on how low or high that rating is. Lowering the rating for malicious actions and raising it for being helpful to the system would allow computers to "care" about who they are communicating with, and to feel "shame" when their morality is so low that they can barely communicate with others (the lowest level might equal expulsion). This simulated feeling of care and shame might allow a justice system to be implemented on computers.
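The mechanism just described could be sketched as follows. The rating deltas and the expulsion floor of -100 are invented for illustration; nothing in the text fixes these numbers.

```python
# A toy sketch of the morality mechanism: malicious acts lower a node's
# rating, helpful ones raise it, and hitting a floor triggers expulsion.

EXPULSION_FLOOR = -100   # assumed "lowest level" at which a node is cut off

class NodeMorality:
    def __init__(self):
        self.rating = 0
        self.expelled = False

    def record(self, delta):
        """Apply a morality change and expel the node if it hits the floor."""
        self.rating += delta
        if self.rating <= EXPULSION_FLOOR:
            self.expelled = True   # simulated "shame" at its most severe

n = NodeMorality()
n.record(-60)   # e.g. a spam incident is reported
n.record(+10)   # helpful behaviour partially restores the rating
n.record(-60)   # a second offense crosses the floor
print(n.expelled)  # True (rating is now -110)
```

Other nodes would consult `rating` before communicating, which is what gives the simulated "care" its deterrent force.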

Possible Implementations

Designing a complete justice system implementation is far beyond the scope of this project. It is still important, however, to describe the features that a fully functional system would require and to outline the potential benefits and shortcomings of such a system. In fact, we were unable to come up with one unique system that would be feasible; instead we propose two potential implementations, each with its own advantages and downsides. Although the implementations have different (and somewhat mutually exclusive) modes of operation, they both take a teleologic-retributive approach to justice. This means that punishments are viewed as necessary, but they are not imposed if doing so would negatively impact the performance or stability of the overall system.

The remainder of this section details the two justice system implementations at a high level and describes how each system could handle three deviant behaviour scenarios: comment spam, denial of service attacks, and phishing.

Local Implementation (Justice Web)

Overview

The implementation described in this section takes the above discussion involving justice, and applies it to the management of a computer network. The implementation is designed to be incrementally-deployable, so that it would be realistic for a network to use the proposed system. The implementation is entitled the “Justice Web”.

The purpose of the Justice Web is to protect public-facing services from attacks coming from outside the network. This is accomplished by keeping a record of the criminal acts committed over connections, and allowing the services access to these records. Criminal acts in this case are actions by a connecting host that are considered harmful to the network. The record kept by the network is a "Morality Rating", an integer meant to reflect the severity of the crimes committed.

Assumptions

Certain assumptions must be made regarding the other class projects in order for this implementation to be deployable. Most importantly, it is assumed that there is some way in which the network can uniquely identify a computer that connects to the network. This allows the Justice Web to keep a criminal log of clients, and recognize if an offender is attempting to connect.

Morality Rating

Morality Rating (MR) is an integer assigned to computers that have connected to a service within the Justice Web. The purpose of the MR is to keep track of a computer’s past offenses, and allow services to restrict access using thresholds. For instance, a service within the Justice Web could restrict access to those above -100 MR.
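The threshold mechanism can be sketched as follows. This is a minimal illustration: the `Service` class, the default threshold of -100, and the default MR of 0 for hosts with no record are assumptions, not part of the design above.

```python
# Minimal sketch of MR-based access control at a Justice Web service.

class Service:
    def __init__(self, mr_threshold=-100):
        self.mr_threshold = mr_threshold
        self.ratings = {}            # host id -> MR (the service's local records)

    def allow(self, host_id):
        # Hosts with no criminal record are assumed to start at MR 0.
        return self.ratings.get(host_id, 0) > self.mr_threshold

svc = Service()
svc.ratings["spammer"] = -100
print(svc.allow("spammer"))    # False: at or below the threshold
print(svc.allow("newcomer"))   # True: no recorded offenses
```

Each service chooses its own `mr_threshold`, so stricter services can refuse hosts that more permissive services still accept.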

While the primary purpose of the Justice Web is to protect against attackers from outside the network, every node in the Justice Web is assigned an MR, which increases and decreases based on their actions within the network. Ideally, those with higher MR are allowed access to more shared resources, though this would be implementation specific.

The MR assigned to a computer is local to the Justice Web that assigned the rating. For example, if two separate networks deploy a Justice Web, the ratings they assign do not affect the other network’s ratings.

Judges

In order to assign MR to offenders, an authority figure is needed to declare if a crime has been committed. In the Justice Web, this role is taken by the Judges, who may be one or more computers within the network. It is the Judges’ responsibility to create the rules of the network, gather the evidence when a claim is made, declare if a crime has been committed, and assign a new MR based on the ruling.

How a Judge is picked is not set in stone, but in general it would be the node(s) in the network with the highest MR. Alternatively, the Judges could be chosen through some democratic process.

The judgments made are mostly automated, based on the rules of the network. However, it can be specified that certain crimes, such as a claim of a phishing scam being committed, be dealt with by a human.

Master List

The Justice Web is a virtual network, in that the nodes are not necessarily connected or even anywhere near each other. Because of this, it would be inconvenient and potentially harmful to have services look up a computer’s MR on every connection attempt. To prevent this, MR will be stored in a central location, but propagated throughout the network.

This is done using a master-slave approach to database replication. The Judges of the network store the "Master List" and propagate the data to the "Slave Lists" stored by the services within the network. The records stored in a Slave List are determined by the thresholds that the specific service has put in place. As mentioned in the Morality Rating subsection, a service can set thresholds to determine whether a computer should be allowed access. In that example, an MR of -100 would be blocked from the service. If a service had only this threshold in place, it would only need to be aware of computers with -100 MR, and so would only store that data in its Slave List. Other approaches to distributing the list could leverage existing research in distributed file systems [15, 16].
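A sketch of this propagation scheme follows, under the assumption that each service registers a threshold with the Master List and only receives records at or below it; the class names and registration mechanism are illustrative.

```python
# Sketch of master-slave replication of Morality Ratings: the Master List
# (held by the Judges) pushes only the records each service needs.

class MasterList:
    def __init__(self):
        self.ratings = {}     # host id -> MR, the authoritative copy
        self.slaves = []      # (threshold, slave dict) pairs, one per service

    def register_slave(self, threshold):
        slave = {}
        self.slaves.append((threshold, slave))
        return slave

    def set_rating(self, host, mr):
        self.ratings[host] = mr
        # Propagate: a service blocking at `threshold` only needs records
        # at or below that value.
        for threshold, slave in self.slaves:
            if mr <= threshold:
                slave[host] = mr
            else:
                slave.pop(host, None)

master = MasterList()
slave = master.register_slave(threshold=-100)   # a service that blocks at -100
master.set_rating("host-a", -100)
master.set_rating("host-b", -5)
print(slave)   # {'host-a': -100}: only blockable hosts are replicated
```

This keeps each Slave List small: the service never stores ratings it would not act on.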

Rules

Judges define and use rules to determine whether a crime has been committed. A rule consists of three parts: The offense, the proof needed, and the severity of the punishment. The offense is a name assigned to the crime, which services can claim has been committed. The proof is the required information for the judges to be able to make a conviction. The severity of the punishment is an integer value to negate from the offender’s current MR.

Each network deploying a Justice Web specifies their own set of rules. These rules are made available to the public so that services within the network are aware of the crimes they can report. This is akin to a human justice system, where everyone under that legal system can see what actions constitute a crime (e.g., the criminal code of Canada).
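A rule and an automated judgment might be encoded like this. The offense name, proof fields, and severity value are illustrative choices, not a format specified by the design above.

```python
# Hypothetical encoding of a rule as the triple described above:
# (offense name, proof required, severity to negate from MR).

from dataclasses import dataclass

@dataclass
class Rule:
    offense: str
    proof_required: list   # evidence fields the Judges need for a conviction
    severity: int          # amount negated from the offender's current MR

RULES = {
    "comment_spam": Rule("comment_spam",
                         ["comment", "hosting_site", "commenter_id"], 10),
}

def judge(claim_offense, evidence, current_mr):
    """Convict only when all required proof is present; return the new MR."""
    rule = RULES.get(claim_offense)
    if rule is None:
        return current_mr                   # no such offense in this network
    if all(field in evidence for field in rule.proof_required):
        return current_mr - rule.severity   # conviction: negate the severity
    return current_mr                       # insufficient evidence, no change

mr = judge("comment_spam",
           {"comment": "buy now!", "hosting_site": "blog.example",
            "commenter_id": "host-x"}, current_mr=0)
print(mr)   # -10
```

Publishing `RULES` is the analogue of publishing a criminal code: services know exactly which offenses they can claim and what proof the Judges will demand.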

Evidence

Evidence is used by the Justice Web to determine if a crime has been committed. Evidence is stored in encrypted logs located on a service’s computer, and submitted to the judges when a claim is made.

Evidence logs are required to prove the occurrence of a rule violation. The Justice Web therefore requires hosts to keep logs of recent network activity (e.g., packet captures) and application layer activity (e.g., web server logs). We require these logs to be digitally signed or encrypted to ensure that the computer making the claim, or any other system in the chain of custody, cannot tamper with the evidence. When evidence is received by the Judges, the logs are decrypted and reviewed. The type of evidence required varies and is defined by the Judges of a network. For a DDoS attack, the Justice Web would potentially be able to examine the evidence logs and determine, through the analysis of statistical evidence, which computers were actively involved in the attack and which traffic was legitimate.[17]
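As one possible realization of the signing requirement, a service and the Judges could share an HMAC key. The key provisioning shown here is an assumption; the design above only requires that logs be signed or encrypted, not any particular scheme.

```python
# Sketch of tamper-evident evidence logs using an HMAC signature.
# JUDGE_KEY stands in for a hypothetical shared secret with the Judges.

import hashlib
import hmac

JUDGE_KEY = b"shared-secret-with-judges"

def sign_log(log_bytes):
    return hmac.new(JUDGE_KEY, log_bytes, hashlib.sha256).hexdigest()

def verify_log(log_bytes, signature):
    """Judges recompute the HMAC; any tampering along the chain of custody
    changes the digest and the evidence is rejected."""
    return hmac.compare_digest(sign_log(log_bytes), signature)

log = b"2011-04-01 10:31:02 GET /login 203.0.113.7"
sig = sign_log(log)
assert verify_log(log, sig)                  # untouched evidence verifies
assert not verify_log(log + b" (edited)", sig)   # tampering is detected
```

A production system would more likely use public-key signatures so that services cannot forge each other's logs, but the tamper-evidence property is the same.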

Membership

Members of a Justice Web would primarily be public-facing services seeking protection from attacks. However, because resources can be shared based on a node's MR, there is reason for computers to join the network simply to access these resources.

Global Implementation

Overview

Extrapolating the concept of the local Justice Web to a multi-network environment is non-trivial. The Internet as we know it today is built by millions of interconnected local networks (hence the term Internet). If we attempt to replicate the properties of the local Justice Web at a larger scale, we notice a few important issues:

  • Where should the master morality list be stored? Distributed storage at a global level is possible, but it is subject to tampering or simply denial of service (refusal to respond with the morality rating of a given host).
  • How are Judges elected? Self-governing entities often have a common set of laws; however, these laws are not necessarily the same as those of other self-governing entities. In the real world, cross-jurisdiction legal systems do exist. For example, the United Nations (UN) and the North Atlantic Treaty Organization (NATO) are organizations in which countries participate in so-called "global councils". Generally, in these types of councils, each participating member country appoints one or more people to represent the country's interests in the council.

Due to these restrictions, we do not believe an incrementally deployable implementation such as the Justice Web, where hosts opt in, is possible at the global scale. This section briefly discusses a different approach that attempts to deal with some of the restrictions mentioned above, at the expense of incremental deployability.

Morality Rating

The global implementation still requires the existence of a morality rating, but in a global setting we require that all hosts have a morality rating built in. By having each host store its own morality rating, we obsolete the concept of a "master list" or "slave list" of morality ratings. The obvious requirement for a built-in morality rating is that the host itself must not be able to arbitrarily modify the value. One possible mechanism is the use of a Trusted Platform Module ([http://www.trustedcomputinggroup.org/developers/ TPM]), which allows encryption and decryption of data but does not allow the extraction of the private encryption key. Indeed, storing the morality rating within hosts rather than in external lists alleviates the need for distributed storage and allows better scalability, but it also requires all hosts to be compliant with the mechanism.

Connection management

Due to the modified morality rating storage, there is no longer the need to look-up the morality rating of a host upon incoming connections. We therefore need a way to transmit the morality rating on each outgoing connection, so that the destination host (i.e., the server) can decide whether or not to allow the connection. A change of this type would mean changing underlying networking protocols to include a new field (the morality rating). If morality ratings are stored locally and transmitted as part of the network protocol, there would be far less overhead than in the Justice Web.
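A hypothetical framing for such a protocol field might look like the following; the 4-byte signed layout and the threshold value are assumptions made purely for illustration.

```python
# Illustrative message format for the global scheme: the sender's MR is
# carried with every connection, so the server decides locally with no
# list lookup. The field layout here is an assumption.

import struct

def make_packet(morality_rating, payload):
    # Prepend a 4-byte signed MR field to the payload (hypothetical framing).
    return struct.pack("!i", morality_rating) + payload

def accept(packet, threshold=-100):
    """Server-side check: parse the MR field and apply a local threshold."""
    mr = struct.unpack("!i", packet[:4])[0]
    return mr > threshold, packet[4:]

ok, body = accept(make_packet(-5, b"GET /"))
print(ok)   # True: MR above the threshold, connection allowed
ok, _ = accept(make_packet(-150, b"GET /"))
print(ok)   # False: could be dropped at a firewall before any processing
```

Because the check is a constant-time parse of the incoming packet, filtering can happen as early as the firewall, which is exactly the overhead advantage over the lookup-based Justice Web.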

Rules and Judges

Similar to the Justice Web, there would need to be a standard set of rules that all hosts agree to. In the global implementation, agreeing upon a standard set of rules might prove to be difficult, since not all hosts/users at the global level have the same views on justice. The problem of judge election also becomes difficult at a global level. We leave this problem to future research.

In summary, the global implementation could offer the same benefits as the Justice Web with much less overhead, but it would require a full reboot of the Internet as well as new hardware, making it an unlikely solution in practice.

Use Cases

This section reviews three common attacks and describes how the computer-based justice system would deal with them.

Case 1: Comment Spam

The first deviant act we investigate is comment spam. This type of spam is typically generated by automated scripts which insert comments on blogs or other sites. Posted comments will generally contain links to other websites which attempt to sell a product or trick the user into revealing banking credentials. Although usually annoying, these comments can direct users to locations where malicious code may be downloaded, even if the original site hosting the comment was initially trusted.

Evidence collected. The comment being reported as spam, the website hosting the comment (forum, blog, etc.), and the ID of the commenter, assuming we have a unique identifier for each commenting host.

Solution

Local implementation

  • Users report comment spam.
  • The morality rating of the offending host is adjusted if the evidence is found to incriminate the host.
  • Based on the new morality rating, the offending host may not be allowed to post to the site, depending on the restrictions of the hosting server.

Global implementation

  • Same method for reporting the comment spam and adjusting the morality rating as in the local implementation above.
  • If a host has a sufficiently low morality rating, the host site will disable the offending host's ability to communicate with the site at all.

Case 2: Denial of Service

Denial of service is an attack in which a service that is normally available is accessed by a large number of hosts, or by a small number of hosts at high frequency. Services under a denial of service (DoS) or distributed denial of service (DDoS) attack are no longer able to serve legitimate requests [14].

Evidence collected. The ID of any host connecting to the victim server for the duration of the attack.

Solution

Local implementation

  • The morality rating of each connecting host is looked up to determine whether its request should be served. This lookup may place even greater load on the host.
  • Once it has been established that participants of the attack have an unacceptable morality rating, they are blocked from communicating with the site.

Global implementation

  • Since the morality rating is passed along with each communication, requests can be filtered out (e.g., at the firewall level).
  • Any incoming communication with a sufficiently low morality rating is simply ignored.

Case 3: Phishing

Phishing provides an interesting challenge for a justice system as the deviant act involves a website that is perfectly legal. A phishing attack occurs when a malicious site pretends to be a legitimate site, tricking users into revealing banking or personal information.

Evidence collected. The fraudulent site URL and the legitimate site URL.

Solution

Local implementation

  • Users report the phishing site.
  • Based on the morality rating of the host of the phishing site, it may be removed from the network.

Global implementation

  • Same method of reporting and morality adjustment as the local implementation above.
  • Removal from the network is not really possible, but the client can read the server's morality rating upon connecting.

Conclusion

Applying justice to a distributed system requires an understanding of how society applies teleologic and retributive methods of punishment, as well as of the range between purposely and negligently participating in an act. Discussions of punishment and intent brought up another social construct that exists in society: morality. When looking at a single computer, it is hard to consider that the computer "intended" to do something, or that it would feel bad if we made it perform a bunch of repetitive operations as a form of punishment. Even though implementing emotions and a care for self-preservation in a computer is difficult, we can at least apply a morality value to each computer node, so that it may be judged by any individual that plans on communicating or interacting with that node.

By discussing specific cases in which a justice system would take part in a distributed system, we can conceptualize a basis upon which a future implementation of justice on computers might be possible. Given the advantages and disadvantages of implementing such a system at a local and global scale, it is evident that a more in-depth look is required into the technical aspects, and that the assumptions supported by the other factors on the internet (attribution, reputation, contracts) must be upheld in order to fight injustice, and to turn fear against those who prey on the fearful, as malicious users do to unprotected users. This is what the Justice Web is for.

Resources

[1] Posner, Richard A., Retribution and Related Concepts of Punishment, The Journal of Legal Studies Vol. 9 No. 1, University of Chicago Press, 1980. PDF

[2] Rawls, John, A Theory of Justice: Revised Edition, Harvard University Press, 2003. PDF (preview copy)

[3] Ezorsky, Gertrude, Philosophical Perspectives on Punishment, State University of New York Press, 1972. HTML (preview copy)

[4] Hobbes, Thomas, Leviathan, first published 1651, republished by Forgotten Books, 2008. HTML

[5] Foucault, Michel, Discipline & Punish: The Birth of the Prison, Random House, New York, 1995. PDF (preview copy)

[6] Nietzsche, Friedrich, Ecce Homo & The Antichrist, translated by Thomas Wayne, New York, 2004. PDF (preview copy)

[7] Haeji Hong, Hacking Through the Computer Fraud and Abuse Act, originally published in 24 U.C. DAVIS L. REV. 283 (1998), HTML (part A)

[8] Haeji Hong, Hacking Through the Computer Fraud and Abuse Act, originally published in 24 U.C. DAVIS L. REV. 283 (1998), HTML (part B)

[9] 928 F. 2d 504 - Court of Appeals, US v. Morris, 2nd Circuit 1991, HTML (case file)

[10] Charles F. Horne (1915) and Claude Hermann Walter Johns (The Encyclopaedia Britannica, 11th ed., 1910-), Ancient History Sourcebook: Code of Hammurabi, c. 1780 BCE, translated by L. W. King, edited by Paul Halsall, March 1998. HTML (Internet History Sourcebook)

[11] Scott D. Sagan, Review: History, Analogy, and Deterrence Theory, The MIT Press, 1991, HTML (book link)

[12] Rollin M. Perkins, A Rationale of Mens Rea, Harvard Law Review, 1939, HTML (book link)

[13] Marquis Beccaria, Of Crimes and Punishments, Translated by: Edward D. Ingraham, Philip H. Nicklin: A. Walker, 1819, HTML (essay translation)

[14] Roger M. Needham, Denial of Service, ACM, New York, USA, 1993. PDF

[15] C.A. Thekkath, T. Mann, and E.K. Lee, Frangipani: A scalable distributed file system, in Proceedings of ACM SIGOPS Operating Systems Review 1997.

[16] S. Ghemawat, H. Gobioff, and S.T. Leung, The Google File System, in Proceedings of the ACM SIGOPS Operating Systems Review. 2003

[17] S. Yu, W. Zhou, R. Doss, Information theory based detection against network behavior mimicking DDoS attacks, IEEE, April 2008. Last visited April 2011.