DistOS-2011W Justice

Members

  • Matthew Chou
  • Mike Preston
  • Thomas McMahon

Meetings

Mar 01

Early discussions on how we would define justice:

  • what are the components of justice?
  • should justice involve preventative measures or should it be strictly reactive?
  • how would evidence be collected and logged?

Discussions on what "punishment" means when referring to computers:

  • What can we do to punish or penalize computers?
  • Does it make sense to punish computers?

Discussions on how human penal systems work:

  • do we want computer justice to be used to dissuade deviant behaviour or should it be used to punish those who have committed "bad" acts?
  • should we implement a system that catches/punishes all bad acts or just punish reported acts?
  • how will we classify deviant behaviour?
    • by the act itself
    • by the results of the act

Discussed how there would need to be some sort of hierarchical justice system with figureheads who manage justice activities for their specific region (a rough sketch of this structure follows the list below):

  • collective internet justice: Justice Web or JLA (Justice Link Assessment)
  • each region patrolled by a justice managing unit:
    • Internet Batman (Gotham), Internet Superman (Metropolis), etc.
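
A minimal sketch, assuming a simple region-to-unit mapping, of how the "Justice Web" and its regional managing units might be organized; the class names, regions, and report handling below are purely illustrative and not part of any existing system:

    # Hypothetical sketch of the hierarchical justice structure discussed above.
    class JusticeUnit:
        """A regional justice managing unit that records incidents for its region."""
        def __init__(self, name, region):
            self.name = name
            self.region = region
            self.incidents = []

        def report(self, offender, description):
            # Only logging is sketched here; investigation and punishment are
            # left undefined at this stage of the discussion.
            self.incidents.append((offender, description))

    class JusticeWeb:
        """The collective internet justice layer that routes reports to regional units."""
        def __init__(self):
            self.units = {}

        def register(self, unit):
            self.units[unit.region] = unit

        def report(self, region, offender, description):
            self.units[region].report(offender, description)

    web = JusticeWeb()
    web.register(JusticeUnit("Internet Batman", "Gotham"))
    web.register(JusticeUnit("Internet Superman", "Metropolis"))
    web.report("Gotham", "10.0.0.13", "suspected spam relay")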

Divided the task of finding research papers into 3 sections:

  • current ways to "punish" computers (Matthew)
  • ways to collect, log, categorize evidence of inappropriate behaviour (Thomas)
  • human methods of justice, various penal systems in our current and historical societies (Mike)

Mar 03

Initial discussion centred on the difficulty we were having finding papers related to the concept of justice in computers, so we shifted to trying to determine exactly what justice should be in the realm of distributed computing:

  • punishing computers is difficult as computers do not care what task they are given; they just complete computations.
  • punishing people is not really the focus we need as that is what human laws are for.
  • if there is some way to punish a computer, does it make sense to punish computers that are being used for "bad" actions if the owner of the computer is unaware of this activity?
    • does this punishment really have a greater effect on the owner of the computer than the computer itself?

Our new focus is to try to narrow down whether the concept of justice actually has a place in distributed computing:

  • determine what purpose justice would serve...why would we have it?
    • if we decide justice is a necessary concept, the focus will become what is a "fair" way to apply punishment for "bad" actions.
    • if justice does not have a useful purpose then we must detail the reason that it is not beneficial.

Resources

[1] Rawls, John. A Theory of Justice: Revised Edition. Harvard University Press, 2003. PDF (preview copy)

  • This book provides a view of justice that may suit the purposes of distributed computing. Rawls describes justice as serving two primary functions:
      1. Assign rights and duties for the basic institutions of society.
      2. Describe the best way to distribute the benefits and burdens of society.
  • If we take this view of justice, as opposed to a penalty-centric view, then justice may have a place in distributed computing. For our purposes, justice could be the basic guidelines to which all members of a distributed society must conform in order for the system to be stable and efficient. Obviously this view is an "all-in" type approach and may be more difficult to describe in terms of being incrementally deployable.
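
A minimal sketch of what such baseline "rights and duties" might look like if expressed as machine-checkable guidelines; the field names, thresholds, and burden-sharing rule are illustrative assumptions of ours, not drawn from Rawls or from any existing system:

    # Hypothetical encoding of a Rawls-style view for a distributed system:
    # (1) rights and duties assigned to every participating node, and
    # (2) a rule for distributing shared burdens across members.
    from dataclasses import dataclass

    @dataclass
    class BasicGuidelines:
        max_requests_per_minute: int = 600      # duty: do not flood peers
        min_relay_contribution: float = 0.05    # duty: carry a share of others' traffic
        right_to_service: bool = True           # right: conforming nodes may use the system

    def distribute_burden(total_work: float, node_capacities: dict) -> dict:
        """Split a shared burden in proportion to each node's capacity."""
        total_capacity = sum(node_capacities.values())
        return {node: total_work * cap / total_capacity
                for node, cap in node_capacities.items()}

    # Example: three nodes of unequal capacity sharing 100 units of relay work.
    print(distribute_burden(100.0, {"a": 1.0, "b": 2.0, "c": 1.0}))
    # {'a': 25.0, 'b': 50.0, 'c': 25.0}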


[2] Foucault, Michel. Discipline & Punish: The Birth of the Prison. Random House, New York, 1995. PDF (preview copy)

  • Foucault's book focuses on how punishment evolved from medieval methods such as drawing and quartering to modern prison methods. These two methods of justice are differentiated by the way in which punishment is carried out. Under medieval, or "Monarchical Punishment", the population is discouraged from doing bad acts by the public, and brutal, way that punishment is exacted; these punishments included torture and executions. On the other hand, Foucault discusses "Disciplinary Punishment", where people deemed to be experts have power over the perpetrator of a "bad" act and handle the punishment of the individual. An example of this is a prison guard who determines how long a prisoner stays in jail.
  • For a distributed computing system, this suggests a couple of ways that justice could be enforced. Think of the general distributed system as a free zone in which computers can act how they wish, but with laws in place describing "bad" acts. If a computer is caught and convicted of doing something against those laws, then the computer could be tortured (forced to provide more resources to other computers), executed (completely removed from the system), or placed under the care of a supervisor computer that allows the "bad" computer to continue to participate in certain, restricted actions until the professional (supervisor) computer approves releasing the "bad" computer back to the general system. The supervisor computer may actually be controlled by a human who is trying to resolve the issue on the offending computer. (A rough sketch of these responses appears after this list.)
  • Another concept worth investigating is Foucault's "Panopticon", a prison in which everything can be seen. This can be extended from the strictly prison sense to the level of daily interactions between people and the idea of shame. Most rules are followed because of the knowledge that those around you will see what you have done and their view of you will change: you will carry a social stigma. If this is adopted by computers, through some reputation mechanism, then perhaps distributed computing relationships could be formed and altered based on the actions of individual computers.
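
A minimal sketch, in Python, of the three responses described above (forced contribution, restricted supervision, and removal); the Node class, thresholds, and quotas are our own illustrative assumptions rather than anything specified by Foucault or the course:

    # Hypothetical enforcement responses modelled on the punishments discussed above:
    # "torture" = forced extra resource contribution, "execution" = complete removal,
    # "supervision" = restricted participation until a supervisor approves release.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        connected: bool = True
        resource_share: float = 1.0   # relative resources owed to peers
        allowed_actions: set = field(default_factory=lambda: {"read", "write", "relay"})

    def punish(node: Node, severity: float) -> None:
        """Apply a response scaled to the severity of the offence."""
        if severity >= 0.9:
            node.connected = False                # "execution": removed from the system
        elif severity >= 0.5:
            node.allowed_actions = {"read"}       # "supervision": restricted participation
        else:
            node.resource_share *= 1.25           # "torture": forced extra contribution

    def release(node: Node, supervisor_approves: bool) -> None:
        """A supervisor (possibly human-controlled) restores full participation."""
        if supervisor_approves:
            node.allowed_actions = {"read", "write", "relay"}

    offender = Node("badhost.example")
    punish(offender, severity=0.6)
    print(offender.allowed_actions)   # {'read'}
    release(offender, supervisor_approves=True)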


[3] Nietzsche, Friedrich. Ecce Homo & The Antichrist, translated by Thomas Wayne. New York, 2004. PDF (preview copy)

  • If we were going to add shame/stigma to computers, there would need to be some mechanism to manage what is good and what is bad. Nietzsche's work could provide a basis for this computer moral code, as he describes two different forms of morality based on two different social positions: "master-morality" and "slave-morality".
    • Master-morality is split based on good vs. bad; for example, good would be things like wealth, strength, health, and power, while bad is associated with terms like poor, weak, sick, and pathetic.
    • Slave-morality is split on good vs. evil, for example, good would be terms like charity, piety, restraint, meekness, submission, while evil terms are worldly, cruel, selfish, wealthy, and aggressive.
  • Although some of these terms make no sense in the realm of computers, others certainly could work as a basis for computer morality. For example, if there were measures of strength, health, and wealth based on network and data concepts like bandwidth, latency, data integrity, etc., then certain computers could be more "good" than others. Similarly, if computers were acting selfishly, cruelly, or aggressively (say, DoS or spam), then those computers would be considered morally "bad". Based on these moral evaluations, computers could have relationships created or destroyed. Moreover, relationship parameters could be given to computers so that if you don't care how something gets accomplished (whether it is morally good or not), you could tell your computer to allow less moral interactions to occur. (A rough sketch of such a morality score appears after this list.)
  • If this morality were tied to the reputation component, then all computers would be able to know how other computers "socially" behave. This would further allow the punishment methods described in the Foucault section above to be handed out based on how "bad" a computer is, and the offending computer would only be released when its morality is deemed appropriate by the supervising (professional) computer.
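
A minimal sketch of how such a morality score and the resulting peering decision might be computed; the metrics, weights, and threshold below are illustrative assumptions of ours rather than anything proposed in the sources:

    # Hypothetical "computer morality" score built from network/data metrics
    # ("master-morality" strengths) minus penalties for selfish or aggressive
    # behaviour such as spam or DoS traffic. All values are normalized 0..1.
    def morality_score(metrics: dict) -> float:
        strengths = (0.4 * metrics.get("bandwidth", 0.0)
                     + 0.3 * (1.0 - metrics.get("latency", 1.0))   # lower latency is "healthier"
                     + 0.3 * metrics.get("data_integrity", 0.0))
        penalties = 0.5 * metrics.get("spam_rate", 0.0) + 0.5 * metrics.get("dos_rate", 0.0)
        return strengths - penalties

    def should_peer(metrics: dict, tolerance: float = 0.5) -> bool:
        """Form or keep a relationship only if the peer meets our moral tolerance."""
        return morality_score(metrics) >= tolerance

    peer = {"bandwidth": 0.9, "latency": 0.2, "data_integrity": 0.95,
            "spam_rate": 0.0, "dos_rate": 0.0}
    print(morality_score(peer), should_peer(peer))   # roughly 0.89, True

The tolerance argument corresponds to the "relationship parameters" idea above: a computer told to accept less moral interactions would simply use a lower tolerance when deciding whether to form a relationship.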