Talk:DistOS-2011W Public Goods

From Soma-notes
==Personal Notes==
===Web Caching (Andrew)===
===Relevant Papers & Links===
Background stuff
*[http://en.wikipedia.org/wiki/Web_cache Web Cache on Wikipedia]
*[http://en.wikipedia.org/wiki/Proxy_server Proxy Server on Wikipedia]
*[http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges]
*[http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html The Internet Protocol Journal - Volume 2, No. 3: Web Caching]
*[http://www.web-caching.com/welcome.html Web Caching Overview]
*[http://docforge.com/wiki/Web_application/Caching DocForge: Web application/Caching]
Other
*[http://www.acm.org/sigcomm/ccr/archive/1999/oct99/Jia_Wang2.pdf?searchterm=distributed+web+caching A Survey of Web Caching Schemes for the Internet]
*[http://conferences.sigcomm.org/imc/2005/papers/imc05efiles/karagiannis/karagiannis.pdf?searchterm=internet+caching Should Internet Service Providers Fear Peer-Assisted Content Distribution?]
*[http://www.sigmobile.org/mobihoc/2003/papers/p25-nuggehalli.pdf?searchterm=distributed+web+caching Energy-Efficient Caching Strategies in Ad Hoc Wireless Networks] Could tie in with the infrastructure stuff.
*[http://www.cs.utsa.edu/~sdykes/papers/hicss99.pdf Taxonomy and Design Analysis for Distributed Web Caching]
===Misc Notes===
* Look into LAN caching. If we propose a new infrastructure where neighbourhoods are networked, distributed caching could be done there.
'''Web Caching as a public good'''
Why would turning web caching into something controlled by the public be a good thing? How could this be done (possible high-level implementation options)?
*The ultimate benefit of web caching is realized by keeping popular data close to the user.
*What is closer to the end user than their ISP? Their neighbours.
*By distributing the cache among people, web requests can be satisfied even closer to home (and, as a result, even faster).
*Possible ways of doing this:
**Option 1
***Have each person dedicate a certain amount of disk space and CPU cycles to storing and maintaining the cache.
***The ISPs can control the overall placement and tracking of cached data, since it is in their interest.
***As users log on and off, data can be transferred accordingly.
***Web requests go to the ISP, which determines where the data resides and mediates a connection between the two users.
**Option 2
***Since there are large incentives for ISPs to do this, they may want to invest in specialized hardware to help implement it.
***This hardware would replace the end user's modem and would include a general-purpose processor and some data storage.
***With ever-decreasing hardware costs, a relatively powerful machine could be built relatively cheaply (especially at scale).
***Since an end user generally leaves their modem on more than their PC, this option would result in greater reliability and aggregate uptime.
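Option 1's placement problem (the ISP deciding which participant holds which cached object, while users constantly log on and off) is essentially what consistent hashing solves. A minimal sketch under that assumption; `NeighbourhoodCacheRing` and all names here are illustrative, not a worked-out design:

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable hash so every node computes the same placement.
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class NeighbourhoodCacheRing:
    """Hypothetical consistent-hash ring mapping URLs to participants.

    When a node logs off, only the objects it held need to migrate to
    its successors on the ring; everyone else's placement is untouched.
    """

    def __init__(self):
        self._ring = []  # sorted list of (hash, node_id) pairs

    def add_node(self, node_id: str, replicas: int = 3):
        # Virtual nodes spread each participant around the ring.
        for i in range(replicas):
            bisect.insort(self._ring, (_hash(f"{node_id}#{i}"), node_id))

    def remove_node(self, node_id: str):
        self._ring = [(h, n) for h, n in self._ring if n != node_id]

    def node_for(self, url: str) -> str:
        # First ring entry clockwise from the URL's hash owns the object.
        idx = bisect.bisect(self._ring, (_hash(url), ""))
        return self._ring[idx % len(self._ring)][1]
```

The point of the ring is exactly the "users log on and off" bullet: departures invalidate only the departing node's share of the cache rather than reshuffling everything.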
'''Local and global benefits'''
Local
*Data is even closer to the end user = lower latency.
*Greater amount of total storage = bigger cache.
*If an optimal size is found and increasing the cache size does not improve performance (3), then the data can simply be replicated at a higher degree across the network.
**Decreases the impact of users logging off.
**Also allows for neighbourhood-specific, ultra-fast caches.
Global
*If every ISP implemented these neighbourhood caches, the total amount of wasted bandwidth going across the Internet would drastically decrease.
*After these have been implemented, a cache hierarchy can be imposed where neighbourhood caches first talk to each other. If there is still a cache miss, then neighbourhoods of neighbourhoods can satisfy requests.
*Results in a very large, diverse cache that can satisfy a variety of requests.
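The lookup order in that hierarchy (local store, then neighbourhood peers, then the "neighbourhood of neighbourhoods" tier, then the origin server) can be sketched as follows; `HierarchicalCache` and its fetch path are assumptions for illustration, not a specified protocol:

```python
class HierarchicalCache:
    """Illustrative multi-tier lookup: local store, then neighbourhood
    peers, then a parent cache, then the origin server."""

    def __init__(self, peers=None, parent=None):
        self.store = {}           # url -> cached body
        self.peers = peers or []  # other caches in this neighbourhood
        self.parent = parent      # "neighbourhood of neighbourhoods"

    def get(self, url, fetch_origin):
        if url in self.store:                 # local hit
            return self.store[url]
        for peer in self.peers:               # neighbourhood tier
            if url in peer.store:
                self.store[url] = peer.store[url]  # replicate locally
                return self.store[url]
        if self.parent is not None:           # regional tier
            value = self.parent.get(url, fetch_origin)
        else:
            value = fetch_origin(url)         # miss everywhere
        self.store[url] = value
        return value
```

Note that each hit along the way also replicates the object one tier closer to the requester, which is what makes later requests from the same neighbourhood fast.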



Revision as of 19:13, 21 March 2011


Tuesday March 1

Key components:

  • Distributed File System
    • Can use something previously presented in class
  • Distributed computation
  • Administration
    • How much does a person need to contribute to the system?
    • How will users submit (small) services they would like to have run?
    • How can very large services be established (from idea to implementation)?

Todo for March 8th:

    • Find papers on distributed computation and administration

Thursday March 3

  • Seek out papers on the specific topic of distributed web caching.
  • Discussed two other interesting public goods or services.
    • Image registry
    • DNA registry - found one article suggesting that, uncompressed, the human genome is between 1.5 and 30 terabytes, but more efficient formats exist that bring it down to 1.5GB: http://www.genetic-future.com/2008/06/how-much-data-is-human-genome-it.html


One example paper: Improving Web Server Performance by Caching Dynamic Data

Tuesday March 8

Currently our idea of public goods consists of:

  • Distributed file storage
    • Ceph seems to be an ideal candidate for this
  • Distributed Computation
    • To avoid a tragedy-of-the-commons situation with both the storage and computation resources, only predetermined, agreed-upon computation will take place, which has a net benefit to everyone participating. This computation is based on the data stored at each node, and the results (metadata) will be stored locally. The computation can be done using idle cycles, much like BOINC projects.
    • Must allow for querying of metadata so that users can effectively search processed data.
    • Maybe we also need to consider moving data between client machines to fully utilize available resources. For instance, suppose a given machine has relatively little available storage but many free computation cycles. Once its data has been processed, the results should be moved to a client machine with available storage, and data from a machine with relatively few free computation cycles should be moved to the original machine.
  • Administration
  • Discussion
    • Why is a public good a good idea?
      • What value
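The storage-versus-cycles balancing idea above could start as a trivial placement policy; the `Node` fields and both helper functions are hypothetical simplifications of what a real scheduler would have to track:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_storage_gb: float  # space pledged by the participant, not yet used
    idle_cycles: float      # normalized measure of spare CPU time

def place_for_processing(nodes):
    # Unprocessed data goes to whoever has the most spare cycles.
    return max(nodes, key=lambda n: n.idle_cycles)

def place_for_archival(nodes):
    # Finished results migrate to whoever has the most free storage.
    return max(nodes, key=lambda n: n.free_storage_gb)
```

A real system would also weigh transfer cost and node uptime, but even this greedy split captures the storage-rich/compute-rich exchange described above.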

What should be a public good?

      • What brings value to everyone?
      • Examples: Distributed DNS, Spam Filtering, Policing?

Thursday March 10

I did a big overhaul on our page: I moved all of the discussion items from the main page here, and all of the main work over to the main page. Our main direction has changed considerably after a few conversations between us and Prof. Somayaji. In general, we are moving away from the "how to do things" (implementation) and toward the "why is it important/better that certain services are in the public's hands". This new direction is now outlined on the main page. Below is an additional note that I couldn't find a place for on the main page.

We are now thinking in analogy. We are trying to identify public goods that are analogous to real-world counterparts. In the real world, examples of public goods are roads, parks, police, the military, water, and sewers.

Internet as a public good: Ostracism and the provision of a public good: experimental evidence

Discussion: Definition of Public Good

  • Economic Definitions:

In economics, a public good is a good that is non-rivalrous and non-excludable. Non-rivalry means that consumption of the good by one individual does not reduce availability of the good for consumption by others; and non-excludability that no one can be effectively excluded from using the good [1]

A good that cannot be charged for in relation to use (like the view of a park or survival of a species), so there is no incentive to produce or maintain the good [2]

A good that is provided for users collectively, use by one not precluding use of the same units of the good by others [3]

A good or service in which the benefit received by any one party does not diminish the availability of the benefits to others, and where access to the good cannot be restricted. (Source: Millennium Ecosystem Assessment Glossary ) [4]

In economics, a commodity typically provided by government that cannot, or would not, be separately parceled out to individuals, since no one can be excluded from its benefits. Public goods, such as national defense, clean air, and public safety, are neither divisible nor exclusive. ... http://www.semp.us/publications/disaster_dictionary.php

  • Overall Take:

A good that inherently requires some form of centralized authority to manage and provide. Ideally "incorruptible."

Thursday March 17

I talked to the professor before class and he encouraged committing thoughts to the wiki to help me organize my thoughts. So I am kind of adding this as a play by play.

In class we talked a fair bit about my topic (infrastructure), mainly because we want it not to be at odds with the rest of the paper. I had proposed a wireless mesh network as a potential infrastructure change to help do away with ISPs. Right now, ISPs hold a potentially significant amount of power over their customers: they can and do shape our packets (limiting the speed at which we do things), and they also have the potential to deny access to sites or disallow services via packet inspection. Additionally, as we recently saw in Egypt, they present convenient choke points to slow or stop the flow of Internet traffic. The mesh provides protection from this with its distributed nature (and it sounds cool).

The rest of the group was concerned about the speed implications of a mesh-style network, particularly if it resided completely on wireless. I suggested that there are algorithms that can make large-scale meshes reasonably efficient, but it was pointed out in counterpoint that, if one ignores fibre, the mesh would have to be dramatically slower than what we have now.

The professor pointed out that people have done research on large-scale meshes and have gotten reasonable efficiency out of them. Additionally, he suggested that a mesh in this context could be wired or not, but that we should imagine it is possible, ask why it would be a good thing, and consider what other things might need to change to support it. He also pointed out that it wouldn't have to be a replacement: a faster service could still come from ISPs.

Ahhhh. This sounds interesting and a little less controversial than a full replacement. Why a mesh overlay? Well, it can provide a slower but extraordinarily robust layer of network communication. We can suggest that having a working network irrespective of the actions of ISPs could be a great thing. Additionally, higher-speed service for luxuries such as video streaming could be paid for through ISPs. The meshes would be organized in urban centres, probably with publicly owned backbones between urban centres. The mesh could self-organize into "neighbourhoods", potentially with publicly provided infrastructure linking these "neighbourhoods" together.

Now we discussed trying to have common threads for our essay. Meshes are robust, since many connections need to be severed to disable a mesh. Caching can also add to the robustness; the professor suggested that the concept of a cache could be extended to include caching code as well as data, allowing web apps to have survivability even when disconnected from the rest of the Internet. This really suggests that a highly reliable and robust mesh could keep neighbourhoods or even cities up and running even when disconnected from the rest of the Internet. So one common thread could be reliability.
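The claim that many connections need to be severed to disable a mesh can be made concrete as edge connectivity: the minimum number of links whose removal partitions the network. A brute-force sketch, illustrative only (real tools would use a max-flow algorithm instead):

```python
from itertools import combinations

def connected(nodes, edges):
    # Depth-first reachability check from an arbitrary start node.
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

def edge_connectivity(nodes, edges):
    # Smallest number of links whose removal disconnects the network.
    # Brute force over all cut sets; fine for toy topologies only.
    for k in range(len(edges) + 1):
        for cut in combinations(edges, k):
            remaining = [e for e in edges if e not in cut]
            if not connected(nodes, remaining):
                return k
    return len(edges)
```

On toy topologies this shows the contrast: a hub-and-spoke ISP topology has edge connectivity 1 (one cut at the choke point takes a customer offline), while a full mesh of four nodes needs three links severed before anyone is isolated.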

Internet caching also provides an increase in speed; arguably, the robust but slower mesh could free up ISPs to provide even greater speed.