DistOS-2011W Public Goods

From Soma-notes

Members

  • Lester Mundt - lmundt at connect.carleton.ca
  • Fahim Rahman - frahman at connect.carleton.ca
  • Andrew Schoenrock - aschoenr at scs.carleton.ca

Presentation

As presented April 5, 2011

Main Goals

Based on the discussion last class, I think the focus of the project should be identifying what kinds of things should fundamentally be "public goods", as opposed to exactly how to implement them. Some of the ideas we have come up with can be used for various implementations, but I think we can generally rely on previous work for most (if not all) of the implementation details. If this assumed direction is correct, then we should aim to answer the following questions:

  • What are good candidates for public goods (e.g. DNS, internet caching, physical connections)? Why should these services be fundamentally controlled by the public? What are the flaws in the way they are currently used, or why should they not be centrally controlled by a single entity? What incentives are there for a given user to participate (willingly or unwillingly)?
  • What would be the net benefit for the local community participating in these public goods?
  • What would be the net impact on the entire internet if all local communities created these public goods (more secure, less bandwidth wasted, etc.)?
  • Could there be disadvantages? If so, how do the benefits offset these drawbacks?
  • What would the cost of the public goods be? What sort of tax would the organizers need to levy?
  • After identifying some candidates for public goods, try to determine what these services have in common, both in the problems they address and in the alternatives available. What are some things that are fundamentally different about these goods?

Note to prof: Please let us know if you have any comments on the overall direction we are taking the project.

Definition

A public good is:

  • A commodity typically provided by government that cannot, or would not, be separately parceled out to individuals, since no one can be excluded from its benefits.
  • A good that is non-rivalrous and non-excludable. Non-rivalry means that consumption of the good by one individual does not reduce availability of the good for consumption by others; non-excludability means that no one can be effectively excluded from using the good.
  • A good that cannot be charged for in relation to use (like the view of a park or survival of a species), so there is no incentive to produce or maintain the good
  • A good that is provided for users collectively, use by one not precluding use of the same units of the good by others
  • A good or service in which the benefit received by any one party does not diminish the availability of the benefits to others, and where access to the good cannot be restricted.
  • Resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others.

Potential Topics

  • What else occurs on the internet
    • physical infrastructure (phoneline, cable, satellite, etc)
    • DNS, BGP, ----
    • TCP/IP, UDP
    • HTTP, SMTP, POP, IMAP, FTP, SSH
    • email = SPAM, search, internet caching

Note to prof: This is still a working list, but if you notice anything that we should definitely try to cover that we haven't thought of, please let us know.

Current Work

  • Lester - Physical infrastructure
  • Andrew - Caching
  • Fahim - DNS

Candidates for Public Goods (Use this area to post your ongoing work)

Abstract

Public goods are resources that are held in common for the benefit of all. The internet is now such an important piece of our economy, culture and communications that the technologies that enable it to operate should be placed in trust for the benefit of the entire population. In this paper we establish criteria to help define public goods as they relate to the internet. Using three examples of public-good candidates, we illustrate the viability and benefits of this conversion. Finally, we apply these criteria to identify other candidates for conversion to public goods.

Introduction

As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as "resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others". These public goods also provide a benefit to all the individuals composing the society. Roads, parks, the military, police, water and fresh air are all examples of public goods.

We propose to add the internet to this long list. (add why it should be) While it might be easy to identify the internet as a public good, determining how to convert it to one is a more difficult process. The internet is a system of heterogeneous computers and hardware, and runs using an even more diverse set of protocols and software.

We have identified three key pieces of the internet and propose how they could be removed from being solely in the hands of private companies and converted to public goods.

We have additionally identified several criteria that can be used to identify other candidates for the public good. (expand)

Physical Infrastructure (Lester)

Introduction

As the ubiquitous nature of the internet has unfolded, people's dependence on it has increased. While the internet's roots lie in a serendipitous alignment of academic and military interests, the internet quickly became a provider of entertainment and communication. Today the internet has enmeshed itself in the fabric of society and is a part of many people's daily ritual; people rely on the internet for business, communication and entertainment. For many, the internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the internet lies in the hands of private companies.

The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers); these are the entities that any user must currently pay to gain access to the internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fibre, and all other hardware that exists outside of the consumer's own network to be the infrastructure of the internet, and will not differentiate between these technologies.

Problems

A variety of problems arise when ISPs own the infrastructure of the internet. These companies make decisions based on their own profit, regardless of the public good. One problem currently experienced is packet shaping<ref name="wikipediaTrafficShaping"> Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. link</ref>. Packet shaping is used by ISPs to control the speed of traffic and thus avoid congestion; it does this by assigning priorities to packets using various criteria decided by the ISPs. While avoiding congestion benefits everyone, with the technology implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times. We do not know if this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure. Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP. This could be implemented by slowing or disallowing traffic to competitors. While ISPs have not openly proposed this, it has been fought against pre-emptively; the movement is known as Net Neutrality<ref name="wikipediaNetNeutrality"> Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. link</ref>. More recently we have become acutely aware that ISPs provide convenient choke points. In Egypt during an uprising, the incumbent government shut down the population's access to the internet by simply forcing the ISPs to shut down. This is not a conclusive list of the weaknesses that private ownership of the infrastructure presents; there are a host of others, but these few are cause for concern.
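To make the shaping mechanism discussed above concrete, here is a minimal sketch of a token bucket, the standard primitive behind this kind of rate limiting. The rates and numbers are illustrative only; no particular ISP deployment is being described.

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: a packet is forwarded only while
    tokens remain; tokens refill continuously at the configured rate."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # refill rate, bytes per second
        self.capacity = burst_bytes   # maximum burst size, bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # top up the bucket for the time elapsed since the last packet
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True    # within the shaped rate: forward
        return False       # over the rate: delay or drop

# e.g. shape one traffic class to 125 kB/s with a 10 kB burst allowance
shaper = TokenBucket(rate_bps=125_000, burst_bytes=10_000)
```

An ISP applying different rates to different protocol classes, without disclosing which, is exactly the opaque prioritization the paragraph above objects to.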

Alternatives

With the current importance of the internet, an alternative to private ownership of the internet's infrastructure needs to be found; we provide two. The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism. This would transform the infrastructure into a virtual public good by legislating that the behaviour of the ISPs be in accordance with the public interest. The problem is that politicians have their own goals and can be unduly influenced by private industries through lobbyists and other means. Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it. These reasons make this option less than compelling. The other option is for the public to own the infrastructure of the internet. We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.


  • Problems with private companies having ownership
  • The options
    • Legislating ISPs (not a true public good, but an attempt to keep it serving the public good)
    • publicly owned infrastructure as an overlay on top of ISPs

Public infrastructure

    • why mesh
      • self organizing, low levels of support, has been used to get into rural areas.
      • can be done incrementally; some towns in the US have been doing this.
      • politically follows current boundaries of municipal, provincial, and federal
    • the benefits
      • basic level of service
      • additional speed
      • additional robustness
      • essential


Future Directions" http://delivery.acm.org.proxy.library.carleton.ca/10.1145/1420000/1410068/p17-johnson.pdf?key1=1410068&key2=4619560031&coll=DL&dl=ACM&ip=134.117.10.200&CFID=14473823&CFTOKEN=63252642

  • A newer paper addressing scalability of ad hoc mesh networks with promising results: "DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks" http://delivery.acm.org.proxy.library.carleton.ca/10.1145/1250000/1241842/p119-eriksson.pdf?key1=1241842&key2=7109560031&coll=DL&dl=ACM&ip=134.117.10.200&CFID=14473823&CFTOKEN=63252642

Web Caching (Andrew)

Introduction

In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of websites do not change very often (e.g. logos, static text, pictures, other multimedia) and hence are good candidates for caching <ref name="visolve">Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. link</ref>. Web caches can either exist on the end user's machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server <ref name="webcaching.com"> Web Caching Overview. visited March 2011. link </ref>. Internet Service Providers have a key interest in web caching and in most cases implement their own caches <ref name="visolve"/><ref name="cisco">Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. link</ref>. There are a variety of incentives for entities on the internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:

  • Reduced Bandwidth Usage

One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage <ref name="visolve"/><ref name="webcaching.com"/><ref name="cisco"/><ref name="survey">Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 link</ref><ref name="docforge"> Web application/Caching. visited March 2011. last modified September 2010. link</ref>. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%<ref name="cisco"/>. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, and any reduction in requests that must be satisfied outside of the ISP is beneficial<ref name="cisco"/>.

  • Improved End User Experience

Another benefit of web caching is the apparent reduction in latency for the end user <ref name="visolve"/><ref name="webcaching.com"/><ref name="survey"/><ref name="docforge"/>. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is reduced significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can be cut down significantly as well. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience<ref name="docforge"/>.

  • Reduced Web Server Load

Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server <ref name="cisco"/><ref name="survey"/>. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs<ref name="docforge"/>.

Additional advantages include the added robustness that a web cache adds to the internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze internet usage patterns <ref name="survey"/>.
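The store-on-miss behaviour described in the introduction above can be sketched in a few lines; `fetch_from_origin` is a hypothetical stand-in for whatever mechanism forwards a missed request to the originating web server:

```python
class WebCache:
    """Sketch of cache-on-miss: serve stored copies, fetch only on a miss."""

    def __init__(self, fetch_from_origin):
        self.store = {}                 # url -> cached response body
        self.fetch = fetch_from_origin  # consulted only on a miss

    def get(self, url):
        if url in self.store:
            return self.store[url]      # hit: the origin never sees the request
        body = self.fetch(url)          # miss: forward to the origin server
        self.store[url] = body          # keep a copy for later requesters
        return body
```

A real cache would also honour the "certain conditions" mentioned above (expiry headers, no-cache directives, object size limits); those checks are omitted from this sketch.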

Web Caching Schemes

Since web caching has been identified as a significant asset to the internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes <ref name="survey"/> identified the main architectures that a large scale web cache can have.

One of these is a hierarchical architecture. In such an architecture web caches are placed at different levels of a network, starting with the client's machine, followed by a local, then regional, and finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand.
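The pass-up-then-copy-down behaviour might be sketched as follows (the level names are illustrative, not a specific deployed design):

```python
class HierCache:
    """One level in a hierarchical web cache (local -> regional -> national)."""

    def __init__(self, parent=None, origin=None):
        self.store = {}
        self.parent = parent   # next cache level up; None at the top
        self.origin = origin   # only the top level contacts the origin server

    def get(self, url):
        if url in self.store:
            return self.store[url]
        # miss: pass the request up; the top level satisfies it from the origin
        body = self.parent.get(url) if self.parent else self.origin(url)
        self.store[url] = body  # leave a copy at this level on the way back down
        return body

national = HierCache(origin=lambda url: "page:" + url)
regional = HierCache(parent=national)
local = HierCache(parent=regional)
```

After one request through `local`, every level on the path holds a copy, which is how popular sites propagate towards the demand.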

Another potential architecture is distributed web caching. In such a structure there is only one level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing as well as introduces fault tolerance that was not available to strictly hierarchical structures. Examples of such systems <ref name="distributed1">Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 link</ref><ref name="distributed2"> Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS '99) link 1, link 2</ref> have been implemented and shown to be effective in realistic web traffic scenarios.
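The peer-metadata lookup just described might be sketched as follows; here the shared "metadata" is simply the set of cached URLs, a stand-in for the digests real systems exchange:

```python
class DistCache:
    """One peer in a single-level distributed web cache."""

    def __init__(self, origin):
        self.store = {}
        self.origin = origin
        self.peers = []          # cooperating caches on the same level

    def digest(self):
        return set(self.store)   # metadata advertised to peers

    def get(self, url):
        if url in self.store:
            return self.store[url]
        for peer in self.peers:          # check peer metadata before the origin
            if url in peer.digest():
                return peer.store[url]
        body = self.origin(url)          # no peer has it: go to the origin
        self.store[url] = body
        return body
```

Because any peer can satisfy a request, load spreads across the level and the loss of a single cache does not take the whole system down.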

Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists; however, there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol <ref name="icp">D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.</ref> can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level <ref name="survey"/>.

Web Caching as a Public Good

Web caching is obviously of enormous importance to the efficient functioning of the internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers' satisfaction is important, but it is not their top priority. Transitioning ISP controlled web caches into a public good would allow for a balance between the financial and end user experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.

Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.

Once web caching becomes a public good, it would also be in the end user's best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user's machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects<ref name="boinc">David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. link</ref>. The end users' machines could simply be used as passive storage devices, where the local, publicly owned or ISP controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users' machines could be active participants in the caching, receiving their user's requests and actually deciding which other users to contact to try and retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase <ref name="p2p">Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC '05). USENIX Association, Berkeley, CA, USA, 6-6. link</ref>.

Another option to allow for lower level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users' computers and allow a special purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful machine could be built relatively inexpensively, especially on a large scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with the demand.

Extending the Definition of Web Caching

If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to "live" closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users' data and the code to run their applications. This means that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources required of modern day corporations.

Another added benefit of this new definition of web caching is that it would allow individual fragments of the internet that, for one reason or another, become disconnected from the internet as a whole to still communicate through the cached web applications and data stored in their web caches. This means that a region undergoing a major natural catastrophe such as an earthquake, or a section of the internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.

Advantages of Having Web Caching as a Public Good

The following list is a summary of the major advantages of having web caching as a public good.

  • Further reduction of wasted bandwidth.
  • Further reduction of latency and improved end user experience.
  • Inherently guaranteed basic level of service.
  • Control in the hands of the users.
  • Incrementally deployable.

Disadvantages of Having Web Caching as a Public Good

  • Support costs.
  • Infrastructure costs.
  • Personal costs.

DNS (Fahim)

Introduction

DNS (the Domain Name System) is considered by many to be the "switchboard" of the internet. To make the internet more user friendly, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.

Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of a user's Internet Service Provider (ISP): a user's ISP maintains the database of names to IP addresses for its users to use.

Implementation Overview

For the sake of simplicity, it will be assumed that the service works like a giant dynamic database, where a request for a hostname resolves to a returned IP address.

For a standard user, the ISP takes care of the DNS service. It is understood by the user that all internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada's biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL. This can be seen as helpful (in the event of typos) or a hindrance (suggestions based on advertising).
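The "giant dynamic database" model, including the advertising redirect just described, can be captured in a toy resolver. All names and addresses below are hypothetical (the IPs come from the reserved documentation ranges):

```python
class IspResolver:
    """Toy name-to-IP database with optional NXDOMAIN ad-redirects."""

    SEARCH_ASSIST_IP = "203.0.113.50"   # hypothetical ISP suggestion server

    def __init__(self, records, redirect_nxdomain=True):
        self.records = dict(records)    # name -> IP address
        self.redirect = redirect_nxdomain

    def resolve(self, name):
        if name in self.records:
            return self.records[name]
        if self.redirect:
            # a typo lands on the ISP's advertising/search page
            return self.SEARCH_ASSIST_IP
        raise LookupError("NXDOMAIN: " + name)

resolver = IspResolver({"example.com": "192.0.2.10"})
```

With the redirect enabled, a mistyped name silently resolves to the ISP's server instead of failing, which is precisely the helpful-versus-hindrance trade-off noted above.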

More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternative options, such as Google's public DNS project or OpenDNS. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in "good samaritans" in a public community.
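As one concrete illustration, on a Unix-like system with a manually managed /etc/resolv.conf, switching to Google's public resolvers is a two-line change (addresses as published by Google; router- or OS-managed configurations differ):

```
# /etc/resolv.conf -- use Google Public DNS instead of the ISP's resolver
nameserver 8.8.8.8
nameserver 8.8.4.4
```

Many home setups instead point this at the DSL/cable router, which in turn forwards to the ISP; the override has to happen wherever the first resolver is configured.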

Issues for further Research and Development

  • With free, public DNS, where is this information about user behaviour going, if anywhere? Is this an example of a good that should be managed by a central/public/democratized authority?


Conclusion

General Public Goods

Key aspects to focus on:

  • robustness/reliability
  • basic guaranteed level of service
  • general speed
  • making user experience a priority over private interests

Other important aspects:

  • incrementally deployable

References

<references/>