Talk:DistOS-2011W Justice


Meetings

Mar 01

Early discussions on how we would define justice:

  • what are the components of justice?
  • should justice involve preventative measures, or should it be strictly reactive?

How would evidence be collected and logged?

Discussions on what "punishment" means when referring to computers:

  • What can we do to punish or penalize computers?
  • Does it make sense to punish computers?

Discussions on how human penal systems work:

  • do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed "bad" acts?
  • should we implement a system that catches/punishes all bad acts or just punish reported acts?
  • how will we classify deviant behaviour?
    • by the act itself
    • by the results of the act

Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region:

  • collective internet justice: Justice Web or JLA (Justice Link Assessment)
  • each region patrolled by a justice managing unit:
    • Internet Batman (Gotham), Internet Superman (Metropolis), etc.

Divided the task of finding research papers into 3 sections:

  • current ways to "punish" computers (Matthew)
  • ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)
  • human methods of justice, various penal systems in our current and historical societies (Mike)

Mar 03

Initial discussion centered on our difficulty finding papers related to the concept of justice in computers, so we turned to determining exactly what justice should mean in the realm of distributed computing:

  • punishing computers is difficult, as computers do not care what task they are given; they just complete computations.
  • punishing people is not really the focus we need as that is what human laws are for.
  • if there is some way to punish a computer, does it make sense to punish computers that are being used for "bad" actions if the owner of the computer is unaware of this activity?
    • does this punishment really have a greater effect on the owner of the computer than on the computer itself?

Our new focus is to narrow down whether the concept of justice actually has a place in distributed computing:

  • determine what purpose justice would serve: why would we have it?
    • if we decide justice is a necessary concept, the focus will become what is a "fair" way to apply punishment for "bad" actions.
    • if justice does not have a useful purpose then we must detail the reason that it is not beneficial.

Mar 08

  • Definition of Justice - Can we separate the computer punishment from the user punishment?
  • Transparency - keeping "rap sheets" on what systems are doing/have done. If you were wrongfully accused of participating in a malicious attack, this record could be used to clear you.
  • Punishment - Computational puzzles for fighting unsolicited inbound traffic.
  • Morality rating - Systems get a "moral rating" that can go up or down. Based on this rating, more or less trust can be given to that system.


Capital punishment? Financial sanctions and imprisonment are our current ways of punishing; they are expensive (maintaining databases, keeping state, paying for prisons).

Bodily harm takes limited time to perform, and the fact that someone has been punished is visible: losing hands or losing eyes is something people can see. The information propagates because the authorities make an example of someone.

Maybe the solution is to restrict protocols if you have a low morality rating. For example, you could restrict encryption and compression, which means anything you do will be publicly visible.
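A minimal sketch of what that restriction could look like, assuming a morality rating normalized to [0, 1]; the thresholds and protocol names below are invented for illustration, not something we decided on:

```python
# Hypothetical sketch: map a node's morality rating to the protocols it
# may use. The thresholds and the protocol set are illustrative
# assumptions.

ALL_PROTOCOLS = {"plaintext", "compression", "encryption"}

def allowed_protocols(morality: float) -> set:
    """Return the protocols a node may use, given a rating in [0, 1]."""
    if morality >= 0.7:                       # trusted: no restrictions
        return set(ALL_PROTOCOLS)
    if morality >= 0.4:                       # suspect: lose compression
        return ALL_PROTOCOLS - {"compression"}
    return {"plaintext"}                      # low morality: all traffic visible

print(allowed_protocols(0.9))  # full protocol set
print(allowed_protocols(0.2))  # {'plaintext'}
```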

Mar 10

  • Offender Registration: a global list of morality records available for perusal by other networks.
  • Encrypted logs on client-side
    • Reporting with tangible evidence
  • Compensation for crimes
  • virus notification
  • make it detrimental for an attacker to have to buy a new computer, rather than aiming for total prevention
    • assumes that a computer can always be identified
  • virus as an umbrella term for mobile code
    • active attackers punished differently from passive attackers
  • Research Topics:
    • What is Justice?
      • Mike
    • Justice in terms of computers.
      • Matthew
    • Crime and Punishment.
      • David
    • Justice Web
      • Thomas McMahon

Mar 17

Research Documentation

Virtual Punishment

I am currently reading part of this book for some details on virtual punishment and a bit of the history the author covers, but I am not sure there is much there yet. link --Mchou2 03:29, 3 March 2011 (UTC)

Responsible Computers?

Theory of Justice

Rawls, John, A Theory of Justice: Revised Edition, Harvard University Press, 2003. PDF (preview copy)

  • This book provides a view of Justice that may serve the purpose of distributed computing. Rawls describes justice as serving two primary functions:
      1. Assign rights and duties for the basic institutions of society.
      2. Describe the best way to distribute the benefits and burdens of society.
  • If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an "all-in" type approach and may be more difficult to describe in terms of being incrementally deployable.


The Birth of Prison

Foucault, Michel, Discipline & Punish: The Birth of the Prison, Random House, New York, 1995. PDF (preview copy)

  • Foucault's book focuses on how punishment evolved from medieval methods ("drawing and quartering") to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under medieval, or "Monarchical", punishment, the population is discouraged from doing bad acts by the public, and brutal, way that punishment is exacted; the punishments included torture and executions. On the other hand, Foucault discusses "Disciplinary Punishment", where people deemed experts have power over the perpetrator of a "bad" act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.
  • For a distributed computing system, this suggests a couple of ways that justice could be enforced. Think of the general distributed system as a free zone in which computers can act as they wish, but with laws in place describing "bad" acts. If a computer is caught and convicted of doing something against the described laws, then the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or placed under the care of a supervisor computer that allows the "bad" computer to continue to participate in certain restricted actions until the professional (supervisor) computer approves releasing the "bad" computer back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer.
  • Another concept worth investigating is Foucault's "Panopticon", a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change; you will carry a social stigma. If this is adopted by computers, through some reputation mechanism, then maybe distributed computing relationships could be formed and altered based on the actions conducted by individual computers.


Ecce Homo & The Antichrist

Nietzsche, Friedrich, Ecce Homo & The Antichrist, translated by Thomas Wayne, New York, 2004. PDF (preview copy)

  • If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche's work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: "master-morality" and "slave-morality".
    • Master-morality is split based on good vs. bad; for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.
    • Slave-morality is split on good vs. evil; for example, good would be terms like charity, piety, restraint, meekness, and submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.
  • Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more "good" than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say DoS or spam), then those computers would be considered morally "bad". Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don't care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur. (A sketch of such a scoring scheme follows this list.)
  • If this morality was tied to the reputation component, then all computers would be able to know how other computers "socially" behave. This would further allow punishment methods, as described in the above Foucault section, to be handed out based on how "bad" a computer is, and the offending computer can only be released when its morality is deemed appropriate by the supervising (professional) computer.
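As a toy illustration of the previous two bullets, here is a sketch that computes a "master-morality" style score from network metrics. The metric names, weights, and per-report penalty are assumptions made up for the example:

```python
# Toy sketch of a "master-morality" score computed from network metrics.
# Weights, caps, and the abuse penalty are illustrative assumptions, not
# anything from the notes or from Nietzsche.

def morality_score(bandwidth_mbps: float, latency_ms: float,
                   integrity_rate: float, abuse_reports: int) -> float:
    """Combine 'strength/health/wealth' style metrics into one score.

    integrity_rate: fraction of transfers that verified correctly (0..1).
    abuse_reports: count of confirmed selfish/aggressive acts (spam, DoS).
    """
    strength = min(bandwidth_mbps / 100.0, 1.0)   # cap "strength" at 100 Mbps
    health = max(1.0 - latency_ms / 500.0, 0.0)   # 500 ms latency is "sick"
    wealth = integrity_rate
    base = (strength + health + wealth) / 3.0
    return max(base - 0.2 * abuse_reports, 0.0)   # each bad act costs 0.2

# A fast, reliable node with no reports vs. the same node caught spamming:
print(morality_score(80, 40, 0.99, 0))  # ~0.90, morally "good"
print(morality_score(80, 40, 0.99, 4))  # ~0.10, morally "bad"
```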

Crime and Punishment

This is just a little placeholder for some thoughts before I post them to the main page.

Limiting capabilities Anil mentioned in class the possibility of revoking or limiting capabilities if a user/computer has been found guilty of a crime. For example, the computer could somehow lose its ability to perform encryption or secure communications. Somewhat related is the idea of CPU throttling by performing additional work (explained in the section below).

Proof-of-Work There has been a lot of research done in the area of computational puzzles to fight spam. The idea is that there is currently very little cost associated with sending spam (much less than .01c per email), so we want to make it a bit more "expensive" for spammers to achieve their goal. One solution is to have any email-sending computer perform some type of computational puzzle every time an email is sent. The result of the computation is appended to the email and can be verified by the recipient. One example is to find a string that, when hashed, gives a result below a specific target value. You can statistically predict how long such a computation will take, and you can tune it to take some particular time (10s, 1m, etc).

I see this as being related to justice, because each self-governing entity can set up these proof-of-work requirements and adjust the difficulty for "trusted" entities and "untrusted" ones. The difficulty can also be increased for entities that misbehave, resulting in a kind of punishment. These punished systems would have to do more computation (e.g., 10m, 1hr) before they're allowed to communicate with someone else.

I have some ideas on how you could technically do this, which we can discuss in class. And now some links:

https://tools.ietf.org/html/draft-jennings-sip-hashcash-06#page-6
http://research.microsoft.com/en-us/projects/pennyblack/spam-com.aspx
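As a minimal sketch of the puzzle mechanics, assuming a simple leading-zero-digits target (the real stamp formats in the links above differ):

```python
# Minimal hashcash-style proof-of-work sketch. Solving takes expected
# work that grows 16x per difficulty digit; verifying is a single hash.
import hashlib
from itertools import count

def solve(challenge: str, difficulty: int) -> int:
    """Find a nonce so SHA-256(challenge + nonce) starts with
    `difficulty` zero hex digits."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Checking a stamp is one hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# An untrusted (or punished) sender would be assigned a higher difficulty:
stamp = solve("mail-from:spammer@example.com", difficulty=4)
assert verify("mail-from:spammer@example.com", stamp, difficulty=4)
```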

Unique identifiers How are machines identified? Although this problem is related to attribution and there's another team working on it, we can make the basic assumption that each machine is identifiable. This identifier should survive a reformat, but buying a new machine would get you a new identifier. We might argue that this is fine, because all we're trying to do is raise the price an attacker has to pay to commit a crime (i.e., buy more machines).

Gathering Evidence

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4489680&tag=1

Above is a paper that proposes using statistical data to differentiate between legitimate and illegitimate traffic during a DDoS attack. While the paper proposes using the statistics to block bad traffic, the same logic can be applied to gathering evidence against the attackers in a DDoS. It gets pretty heavy into the statistical analysis, so it is probably better to read the paper than to have me attempt to explain it. Basically, it is meant to detect a DDoS that is purposefully disguised as a legitimate traffic flood. This means justice can be served on genuinely malicious computers, as opposed to on the merely many computers that want your resources.
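To make the general idea concrete (this is not the paper's actual statistical test, just a toy deviation check invented for illustration):

```python
# Toy illustration of statistics-based evidence gathering during a flood:
# flag sources whose request rate deviates far from the population mean.
from statistics import mean, stdev

def flag_suspects(requests_per_source: dict, k: float = 3.0) -> list:
    """Flag sources more than k standard deviations above the mean rate."""
    rates = list(requests_per_source.values())
    if len(rates) < 2:
        return []
    mu, sigma = mean(rates), stdev(rates)
    return [src for src, r in requests_per_source.items()
            if sigma > 0 and (r - mu) / sigma > k]

# Mostly legitimate sources plus one flooder:
traffic = {f"host{i}": 10 + i % 5 for i in range(50)}
traffic["attacker"] = 5000
print(flag_suspects(traffic))  # ['attacker']
```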

http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01219052

Above is a paper that discusses the idea of computer forensics and what is needed in order to gather and manage evidence. Although it is meant to be applied at a human level of judgement, computers may be capable of processing this evidence effectively. Logs maintained by routers and local devices may be used as evidence, provided there is a way to encrypt the data while preserving its original form. The paper also discusses the challenge of presenting computer-related evidence to non-technical jurors, but this is not a concern for computer-level management. All that is required for computer forensics to work is additional software running on select computers to process and preserve any evidence gathered.
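One standard way to make such logs tamper-evident is a keyed hash chain; the sketch below is an assumption on our part, not the paper's scheme, and the key handling is a placeholder:

```python
# Sketch of a tamper-evident (hash-chained) evidence log: each entry's
# MAC covers the entry and the previous MAC, so editing or deleting any
# entry breaks the chain. KEY handling is a placeholder assumption.
import hmac, hashlib

KEY = b"per-node secret, ideally kept from the node's own user"

def append_entry(log: list, event: str) -> None:
    """Append (event, mac) where mac covers the event and the prior mac."""
    prev_mac = log[-1][1] if log else ""
    mac = hmac.new(KEY, (prev_mac + event).encode(), hashlib.sha256).hexdigest()
    log.append((event, mac))

def verify_log(log: list) -> bool:
    """Recompute the chain; any tampered or removed entry breaks it."""
    prev_mac = ""
    for event, mac in log:
        expected = hmac.new(KEY, (prev_mac + event).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            return False
        prev_mac = mac
    return True

log = []
append_entry(log, "2011-03-10 12:00 outbound SYN flood to 10.0.0.5")
assert verify_log(log)
```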

CFAA (Computer Fraud and Abuse Act)

In terms of justice, there is an act that specifies which cyber crimes exist and the caliber at which they should be categorized. One fundamental idea it follows is mens rea, defined as the "mental state" of a crime. "The Model Penal Code ("MPC") lists four levels of mens rea -- purposely, knowingly, recklessly, and negligently. The MPC categories range from the highest level, purposely, to the lowest level, negligently. These mens rea levels are further divided into high and low mens rea requirements. The high mens rea levels include acts criminals do intentionally and knowingly. The low mens rea levels include acts criminals do recklessly, negligently, and with strict liability. Criminals have a higher level of mens rea when their intent is more specific; therefore, they are more blameworthy. With these differing mens rea categories in mind, Congress drafted CFAA to address computer crimes occurring on the Internet."[1]

Reading into the decisions made when updating the CFAA raises the distinctions between users who "intentionally" do harm, users who unknowingly participate, and users who attempt to help the system by hurting it and then fixing it (Morris).

Another note: laws and rules are being made so that humans can be penalized for such negative cyber actions, but even before penalties, it is important to set up a system secure enough to mitigate the negative actions that can take place, just as workers in a business must be educated on detecting malicious software and other vulnerabilities in order to further secure the system. Setting up standalone protection on each system would remove the need to punish certain acts, since they would be impossible to commit. Law is only part of the answer.


Concept: Justice Web

The Justice Web is a possible implementation that uses the research we have done so far. It is essentially an incrementally-deployable network system that shares resources with users within the same Justice Web, based on a morality rating. Evidence is logged so that those within the Web may be held accountable, and those outside it may be recorded and watched for future misbehaviour.

What it is

The Justice Web is an implementation that treats a network as a distributed system. Resources are shared among the users, based on some measure of trust. As the web grows, more computers become linked to each other within the web, making it harder to manage trust given to each member of the Web. Also, the Web should provide some shared protection for those within the network against external attacks.

Because of this, some sort of Justice System is needed to process evidence and sentence malicious computers. The Justice Web would need a computer or computers to act as the judge. After the judgement, the Justice Web would then need to enforce a penalty on the offender.

What it does

The Justice Web links multiple computers together to act as a distributed system. The amount of resources allotted to a member depends on their moral rating and trust. The most trusted computer would possibly be the leader of the Web, acting as the judge. To gather evidence, a log is kept at each implementation node. This evidence is encrypted so that the user cannot tamper with it.

The evidence is collected by the Justice Web and handled by members with a high level of trust. That is to say, the highly trusted computers within the system would essentially be able to define how much evidence is needed, as well as what punishment is to be handed out. The powers of the high members are not absolute, but they are capable of influencing a standard set of rules. Rulings are handled using common law, with a punishment handled in the same way as previous ones unless explicitly changed by the high members.

After the evidence is processed, and a ruling is made by the high members, the Justice Web must then enforce the punishment. For threats coming from outside the Web, each member of the Web is warned about the offender. Continued communication with the offender will be allowed, but if an infection does occur, the punishment for becoming infected would be more severe.

As for offenders within the system, the morality rating attached to that member is affected, and the amount of trust is decreased. From a practical standpoint, the punishment would involve restricting the resources accessible to the member while increasing the member's workload. The amount of trust increases over time, allowing the member to slowly regain more and more access to resources, but the morality rating would be kept the same so that it remains visible that the member has done wrong in the past.
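A sketch of that recovery behaviour, assuming an exponential recovery curve; the curve shape and the half-life are illustrative choices, not something we settled on:

```python
# Sketch of the punishment model above: trust recovers over time after an
# offense, while the morality record of the offense is permanent. The
# exponential recovery and 30-day half-life are illustrative assumptions.

def current_trust(base_trust: float, penalty: float,
                  days_since_offense: float,
                  half_life_days: float = 30.0) -> float:
    """Trust drops by `penalty` at the offense and recovers toward
    `base_trust` with the given half-life."""
    remaining_penalty = penalty * 0.5 ** (days_since_offense / half_life_days)
    return base_trust - remaining_penalty

print(current_trust(1.0, 0.6, 0))    # 0.4 immediately after the offense
print(current_trust(1.0, 0.6, 90))   # 0.925 after three half-lives
```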


Prevent DoS by preventing spoofing

Ingress filtering is another good method to prevent users on a network from spoofing IPs for DoS attacks. link


Implementation Section Notes

All computers have a unique ID.

There exists some form of morality reputation that is public knowledge. - It should be clear to everyone how you can lose morality. There should also be a clear process to follow in case of disputes.

Website administrators will limit site usage based on the morality/reputation rating.

Morality history. - The current morality value is useful for quick validation. More complicated scenarios (e.g., buying something from ebay) might merit a more detailed explanation of why a computer has a particular morality value, so it should be possible to see the history.
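A minimal sketch of such a record, keeping both the current value for quick checks and the full history for higher-stakes decisions; the field names here are hypothetical:

```python
# Sketch of a morality "rap sheet": a current value for quick validation
# plus the full history for cases like the ebay example above.
from dataclasses import dataclass, field

@dataclass
class MoralityRecord:
    machine_id: str
    current: float = 1.0
    # each history entry: (date, change, reason)
    history: list = field(default_factory=list)

    def apply(self, date: str, change: float, reason: str) -> None:
        """Apply a morality change, clamped to [0, 1], and record why."""
        self.current = max(0.0, min(1.0, self.current + change))
        self.history.append((date, change, reason))

record = MoralityRecord("machine-42")
record.apply("2011-03-10", -0.3, "participated in DDoS (evidence log #17)")
print(record.current)   # 0.7 -- enough for quick validation
print(record.history)   # full explanation for more complicated scenarios
```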

Website administrators will determine what restrictions are imposed on a website based on morality ratings. Different sites may have different penalties based on their own rules.

Components Necessary for a Distributed Computer Justice System

Reporting System

  • needs to be limited so that it cannot be spammed; maybe only a certain number of reports can be logged per day, per hour, etc.
  • all users need to be able to report a crime based on the known laws of the distributed system.

Evidence Logging System

Morality Reputation (Rap Sheet)

Local Implementation

  • Each implementation stores an encrypted log of accesses.
  • Using statistics (as described in the Gathering Evidence section) to create evidence logs to detect such things as DDoS.
  • A leader or group of leaders stores a master list of offenders, using the same style of moral rating.
  • The master list is propagated to nodes within the network, each website using the moral rating in a default or custom manner.
  • Network is used specifically for providers of services. Members of the network can provide shared services based on the network configuration (outside of our scope).
  • The leader or group of leaders would gather evidence logs when an offense is made, and update the master list.
  • The network is designed to prevent outside attackers rather than attackers within.

Comment Spam

Evidence

  • Need to detect the bots and the origin of the spam.
  • should include:
    • the comment itself.
    • a link to the page the comment exists on.
    • the unique ID of the perpetrator.
    • justification of why it is spam.

Investigating computer

  • has access to the transaction/communication data of the perpetrating computer.
  • compares the reported message to other communications to detect whether spam has occurred (see the sketch below).
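A sketch of the evidence report with the fields listed above, plus the naive comparison an investigating computer might run; the field names and the duplicate-count test are assumptions for illustration:

```python
# Sketch of a comment-spam evidence report and a naive duplicate check.
from dataclasses import dataclass

@dataclass
class SpamReport:
    comment: str            # the comment itself
    page_url: str           # link to the page the comment exists on
    perpetrator_id: str     # unique ID of the perpetrator
    justification: str      # why the reporter believes it is spam

def looks_like_spam(report: SpamReport, other_comments: list,
                    threshold: int = 3) -> bool:
    """Flag if the same comment text appears many times elsewhere."""
    repeats = sum(1 for c in other_comments if c == report.comment)
    return repeats >= threshold

report = SpamReport("Buy cheap pills!", "http://example.org/post/1",
                    "machine-99", "identical text posted on many pages")
print(looks_like_spam(report, ["Buy cheap pills!"] * 5))  # True
```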

Currently deployed solutions

  • CAPTCHAs - try to detect whether the comment submission came from a human or a bot.
  • Filtering - scan for and block specific keywords (pharmaceutical terms, porn terms, etc)
  • Rate limiting - only allow N comments in X time from the same source.

Denial of Service

Use the unique ID to trace back all traffic from the DoS attack to originating machines.

  • if a certain percentage of the traffic originates from a single ID (say 60%), then a DoS has occurred.
  • only the computer conducting the DoS is penalized (a sketch of this check follows).
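A sketch of that threshold check; the 60% figure comes from the note above, and the data layout is assumed:

```python
# Sketch of the share-of-traffic rule: attribute attack packets by unique
# ID and penalize only IDs whose share crosses the threshold.
from collections import Counter

def dos_offenders(packet_source_ids: list, threshold: float = 0.6) -> list:
    """Return IDs responsible for more than `threshold` of the traffic."""
    total = len(packet_source_ids)
    counts = Counter(packet_source_ids)
    return [mid for mid, n in counts.items() if n / total > threshold]

traffic = ["machine-7"] * 700 + ["machine-1"] * 150 + ["machine-2"] * 150
print(dos_offenders(traffic))  # ['machine-7'] -- only the DoS source
```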

Current solutions

  • Blacklists - Don't allow incoming connections from specific IP addresses
  • Routing/configuration - Ingress filtering, ACL, firewalls, syn-cookies
  • Dynamic over-provisioning - When you detect a surge in traffic (legitimate or attack), increase the amount of bandwidth available.
  • Null routing - Refuse to route traffic being sent to the victim for a period of time (upstream)

Phishing

Send the original website's link as well as the phishing site's link in the report so that the investigating computer can compare them.