Talk:DistOS-2011W Public Goods
Revision as of 22:58, 20 March 2011
Tuesday March 1
Key components:
- Distributed File System
- Can use something previously presented in class
- Distributed computation
- Administration
- How much does a person need to contribute to the system?
- How will users submit (small) services they would like to have run?
- How can very large services be established (from idea to implementation)?
Todo for March 8th:
- Find papers on distributed computation and administration
Thursday March 3
- Seek out papers on specific topic of distributed web cache.
- Discussed two other interesting public goods or services.
- Image registry
- DNA registry - found one article suggesting that, uncompressed, the human genome is between 1.5 and 30 terabytes, but that more efficient formats exist that bring it down to 1.5GB
http://www.genetic-future.com/2008/06/how-much-data-is-human-genome-it.html
One example paper: Improving Web Server Performance by Caching Dynamic Data
Tuesday March 8
Currently our idea of public goods consists of:
- Distributed file storage
- Ceph seems to be an ideal candidate for this
- Distributed Computation
- To avoid a tragedy-of-the-commons situation with both the storage and computation resources, only predetermined, agreed-upon computation will take place, which has a net benefit to everyone participating. This computation is based on the data stored at each node, and the results (metadata) will be stored locally. The computation can be done using idle cycles, much like BOINC projects.
- Must allow querying of metadata so users can effectively search processed data.
- Maybe we also need to consider the movement of data between client machines to fully utilize available resources. For instance, assume a given machine has relatively little available storage but many free computation cycles. Once its data has been processed, the results should be moved to a client machine with available storage, and data from a machine with relatively few free computation cycles should be moved to the original machine.
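The data-movement idea above could be sketched as a simple pairing heuristic: nodes with many free cycles but little free storage hand their processed data to nodes with the opposite profile. This is only an illustrative sketch; the function and field names (`rebalance`, `free_storage`, `free_cycles`) are assumptions, not part of any design we have agreed on.

```python
def rebalance(nodes):
    """Return a list of (source, destination) moves for processed data.

    `nodes` maps a node id to a dict with 'free_storage' (GB) and
    'free_cycles' (arbitrary units). Hypothetical heuristic only.
    """
    # Nodes with more spare cycles than spare storage are "compute-heavy";
    # their processed output should migrate elsewhere.
    compute_heavy = sorted(
        (n for n, r in nodes.items() if r["free_cycles"] > r["free_storage"]),
        key=lambda n: nodes[n]["free_storage"])        # least storage first
    storage_heavy = sorted(
        (n for n, r in nodes.items() if r["free_storage"] >= r["free_cycles"]),
        key=lambda n: -nodes[n]["free_storage"])       # most storage first
    # Pair them up: processed data flows compute-heavy -> storage-heavy.
    return list(zip(compute_heavy, storage_heavy))

moves = rebalance({
    "a": {"free_storage": 5,   "free_cycles": 900},   # lots of spare CPU
    "b": {"free_storage": 800, "free_cycles": 10},    # lots of spare disk
})
print(moves)  # [('a', 'b')]
```

A real system would also weigh transfer cost against the gain, but the pairing captures the intuition: keep every node's scarce resource free.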
- Administration
- Main issue: how are services agreed upon? Once a service is implemented (e.g. an image store), distributing it and getting it running isn't a major issue.
- Maybe users should submit potential services to be run on the stored data and "the system" should decide what to do. This could/should be done based on available computation cycles, amount of generated metadata and the overall "popularity" of the service.
- Papers that seem related to this issue:
- The Case for Distributed Decision Making Systems (old)
- Distributed decision making: a research agenda (old)
- A mathematical framework for asynchronous, distributed, decision-making systems with semi-autonomous entities: algorithm synthesis, simulation, and evaluation
- Distributed Decision Making: A Proposal of Support Through Cooperative Systems
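The admission idea above, where "the system" ranks submitted services by available computation cycles, generated metadata, and popularity, could look something like the following sketch. The weights, field names, and scoring formula are all assumptions for illustration, not a worked-out policy.

```python
def service_score(svc, spare_cycles):
    """Benefit-per-cost score for a submitted service; higher is better."""
    if svc["cycles_needed"] > spare_cycles:
        return 0.0                      # not enough idle capacity right now
    # Cost combines compute demand and the metadata it would generate.
    cost = svc["cycles_needed"] + svc["metadata_gb"]
    return svc["popularity"] / cost

def choose_services(submissions, spare_cycles):
    """Return names of affordable services, best score first."""
    ranked = sorted(submissions,
                    key=lambda s: service_score(s, spare_cycles),
                    reverse=True)
    return [s["name"] for s in ranked if service_score(s, spare_cycles) > 0]

subs = [
    {"name": "image-store", "cycles_needed": 10, "metadata_gb": 2, "popularity": 90},
    {"name": "dna-index",   "cycles_needed": 50, "metadata_gb": 30, "popularity": 40},
]
print(choose_services(subs, spare_cycles=40))  # ['image-store']
```

The point of the sketch is only that admission can be mechanical once the community agrees on the inputs; deciding the weights is exactly the distributed decision-making problem the papers above address.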
- Discussion
- Why is a public good a good idea?
- What value
What should be a public good?
- What brings value to everyone?
- Examples: Distributed DNS, Spam Filtering, Policing?
Thursday March 10
I kind of did a big overhaul on our page, moved all of the discussion items from the main page here and all of the main work over to the main page. Our main direction has changed considerably after a few conversations between us and Prof. Somayaji. In general we are moving away from the "how to do things" (implementation) and to the "why is it important/better that certain services are in the public's hands". This new direction is now outlined on the main page. Below is an additional note that I couldn't find a place for on the main page.
We are now thinking in analogy: we are trying to identify public goods that are analogous to real-world counterparts. In the real world, examples of public goods are roads, parks, police, military, water, and sewers.
Internet as a public good: Ostracism and the provision of a public good: experimental evidence
Discussion: Definition of Public Good
- Economic Definitions:
In economics, a public good is a good that is non-rivalrous and non-excludable. Non-rivalry means that consumption of the good by one individual does not reduce availability of the good for consumption by others; and non-excludability that no one can be effectively excluded from using the good [1]
A good that cannot be charged for in relation to use (like the view of a park or survival of a species), so there is no incentive to produce or maintain the good [2]
A good that is provided for users collectively, use by one not precluding use of the same units of the good by others [3]
A good or service in which the benefit received by any one party does not diminish the availability of the benefits to others, and where access to the good cannot be restricted. (Source: Millennium Ecosystem Assessment Glossary ) [4]
In economics, a commodity typically provided by government that cannot, or would not, be separately parceled out to individuals, since no one can be excluded from its benefits. Public goods, such as national defense, clean air, and public safety, are neither divisible nor exclusive. ... http://www.semp.us/publications/disaster_dictionary.php
- Overall Take:
A good that inherently requires some form of centralized authority to manage and provide. Ideally "incorruptible."
Thursday March 17
I talked to the professor before class and he encouraged committing thoughts to the wiki as a way to organize them. So I am adding this as a play-by-play.
In class we talked a fair bit about my topic (infrastructure), mainly because we want it not to be at odds with the rest of the paper. I had proposed a wireless mesh network as a potential infrastructure change to help do away with ISPs. Right now ISPs hold a potentially significant amount of power over their customers: they can and do shape our packets (limiting the speed at which we do things), and with packet inspection they have the potential to deny access to sites or disallow services. Additionally, as we recently saw in Egypt, they present convenient choke points to slow or stop the flow of internet traffic. The mesh provides protection from this with its distributed nature (and it sounds cool).
The rest of the group was concerned about the speed implications of a mesh-style network, particularly if it resided completely on wireless. I noted that there are algorithms that can make large-scale meshes reasonably efficient, but it was pointed out in counterpoint that, if one ignores fibre, the mesh would be dramatically slower than what we have now.
The professor pointed out that people have done research on large-scale meshes and have gotten reasonable efficiency out of them. Additionally, he suggested that a mesh in this context could be wired or wireless; the exercise is to imagine that it's possible, ask why it would be a good thing, and ask what else might need to change to support it. He also pointed out that it wouldn't have to be a replacement: faster service could still come from ISPs.
Ahhhh. This sounds interesting and a little less controversial than a full replacement. Why a mesh overlay? It can provide a slower but extraordinarily robust layer of network communication. We can suggest that having a working network, irrespective of the actions of ISPs, could be a great thing. Additionally, higher-speed service for luxuries such as video streaming could be paid for through ISPs. The meshes would be organized in urban centres, probably with publicly owned backbones between them. The mesh could be self-organizing into "neighbourhoods", potentially with publicly provided infrastructure linking these "neighbourhoods" together.
Now we discussed trying to have common threads for our essay. Meshes are robust since many connections need to be severed to disable one. Caching can also add to the robustness: the professor suggested that the concept of a cache could be extended to include caching code as well as data, allowing web apps to survive even when disconnected from the rest of the internet. This suggests that a highly reliable and robust mesh could keep neighbourhoods or even cities up and running while cut off from the rest of the internet. So one common thread could be reliability.
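The robustness claim above can be made concrete with a toy comparison, assuming nothing beyond basic graph reachability: in a mesh every node has redundant paths, so cutting one link leaves the network connected, while in a hub-and-spoke (ISP-like) topology cutting one link strands a node. The six-node ring below is a deliberately minimal stand-in for a real mesh.

```python
from collections import deque

def connected(nodes, edges):
    """BFS reachability check over an undirected edge list."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nxt in adj[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return len(seen) == len(nodes)

nodes = list(range(6))
ring = [(i, (i + 1) % 6) for i in range(6)]   # minimal mesh: two paths per node
star = [(0, i) for i in range(1, 6)]          # hub-and-spoke: one path per node

# Sever one link from each topology and test connectivity.
print(connected(nodes, ring[1:]))   # True: traffic routes the other way around
print(connected(nodes, star[1:]))   # False: the cut spoke strands node 1
```

A real urban mesh would have far more redundancy than a ring, which only strengthens the argument; the ring is the weakest structure that still shows the effect.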
Internet caching also provides an increase in speed; arguably, the robust but slower mesh could free up ISPs to provide even greater speed.