DistOS-2011W Public Goods
Abstract
Public goods are resources held in common for the benefit of all. The internet is now such an important piece of our economy, culture and communications that the technologies that enable it to operate should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to the internet. Using three examples of public-goods candidates (physical infrastructure, web caching, DNS) we illustrate the viability and benefits of this conversion. Finally, we establish criteria with which to identify other candidates for public goods.
Introduction
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”<ref name="wirelessRural">David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5 (September 2008). DOI=10.1145/1410064.1410068 link</ref>. These public goods also provide a benefit to all the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the internet to this long list. The internet is becoming a vital tool in nearly everyone's life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet for individuals worldwide is quickly becoming essential. While it might be easy to declare the Internet a public good, identifying how to convert it into one is a more difficult process. The Internet is a system of heterogeneous computers and hardware, and it runs using an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (e.g. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions.
- Which aspects of the Internet should be controlled by the public?
- How are these aspects identified?
- Are these aspects absolutely fundamental to the functionality of the Internet?
- What are the problems with how these aspects are controlled today?
- What are the advantages and disadvantages of having this aspect of the Internet as a public good?
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the internet, web caching and DNS. We propose how these aspects could be removed from being solely in the hands of private companies and converted to the public good. We chose these three pieces because they are essential to the current operation of the internet. After examining the benefits of converting these three pieces of the Internet to public goods, we added another key question to the list above:
- What qualities do these potential public goods have in common?
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the internet as candidates for the public good.
From Presentation
Generally speaking, a public good is:
- an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole
- provided for users collectively, where the use by one does not preclude the use of the good by others
- managed completely by the public, who has overall control
- an entity where the public's best interest is paramount over private concerns
- e.g. roads, parks, military, utilities, etc.
The Internet as a Public Good
- Universal access to the Internet will be essential
- The Internet as a whole is too large to effectively manage
- Certain aspects of the Internet should not be publicly controlled (e.g. business)
Problem definition:
- What qualities do these potential public goods have in common?
Candidates for Public Goods
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.
Physical Infrastructure
Introduction
As the ubiquitous nature of the internet has unfolded, people's dependence on it has increased. While the internet's roots lie in a serendipitous alignment of academic and military interests, the internet quickly became a provider of entertainment and communication. Today the internet has enmeshed itself in the fabric of society and is a part of many people's daily ritual; people rely on the internet for business, communication and entertainment. For many, the internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the internet lies in the hands of private companies.
The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers); these are the entities that any user must currently pay to gain access to the internet. For the purposes of this paper we will consider the servers, routers, switches, hubs, wires, fibre, and all other hardware that exists outside of the consumers' own networks to be the infrastructure of the internet, and we will not differentiate between these technologies.
Problems
A variety of problems arise with ISPs owning the infrastructure of the internet. These companies make decisions based on their own profit, irrespective of the public good. One problem currently experienced is packet shaping<ref name="wikipediaTrafficShaping"> Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. link</ref>. Packet shaping is currently used by ISPs to control the speed of traffic and thus avoid congestion; it does this by assigning priorities to packets using various criteria decided by the ISPs. While avoiding congestion benefits everyone, with the technology implemented by private companies we don't know which protocols are limited, by how much, or whether it is only done at peak times. We don't know if this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure. Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP. This could be implemented by slowing or disallowing traffic to competitors. While this hasn't been openly proposed by ISPs, it has been fought against pre-emptively; the movement is known as Net Neutrality<ref name="wikipediaNetNeutrality"> Wikipedia/Net Neutrality. visited April 2011. last modified April 2011. link</ref>. More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population's access to the internet by simply forcing the ISPs to shut down. This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents, but these few are cause for concern.
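Traffic shaping of the kind described above is commonly built on a token-bucket algorithm. The sketch below is illustrative only: the traffic classes, rates, and burst sizes are hypothetical, not any ISP's actual policy.

```python
import time

class TokenBucket:
    """Token-bucket shaper: a packet is forwarded only if enough tokens
    are available; tokens refill continuously at the configured rate."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s      # refill rate
        self.capacity = burst_bytes       # maximum burst size
        self.tokens = burst_bytes         # bucket starts full
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                   # forward the packet now
        return False                      # over the rate: queue or drop

# Hypothetical per-class policy: peer-to-peer traffic capped far below web traffic.
shapers = {
    "http": TokenBucket(rate_bytes_per_s=1_000_000, burst_bytes=100_000),
    "p2p":  TokenBucket(rate_bytes_per_s=50_000, burst_bytes=10_000),
}

def shape(traffic_class, packet_bytes):
    """Decide whether a packet of the given class may be forwarded now."""
    return shapers[traffic_class].allow(packet_bytes)
```

Because the per-class rates are opaque to subscribers, exactly the concerns above arise: only the operator knows which classes are capped and by how much.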
Alternatives
With the current importance of the internet, an alternative to private ownership of the internet's infrastructure needs to be found; we provide two. The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism. This would transform the infrastructure into a virtual public good by legislating the behaviours of the ISPs to be in accordance with the public interest. The problem is that politicians have their own goals and can be influenced unduly by private industries through lobbyists and other means. Additionally, the government is slow to act, and this could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing that behaviour. These reasons make this option less than compelling. The other option is for the public to own the infrastructure of the internet. We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people. This new infrastructure would coexist with the current ISPs and operate in parallel. Conceivably the speed of this new infrastructure wouldn't be as fast as the incumbents', and people might still desire higher speeds. In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government: municipal, provincial and federal. This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure. The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries. We would like to describe one possible implementation to see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that should be a public good.
Implementation Description
The implementation that we chose to explore for the purposes of this paper is a wireless mesh, for several reasons. A mesh provides significant increases in robustness: it presents no single point of connection, so it cannot be disabled as easily as current ISPs can be. Even if a portion of the mesh were partitioned from the internet, it would continue to function within its partition. Considering the significant portion of the population that uses the internet to communicate, this could be a significant benefit in a disaster scenario: while other forms of communication that rely on centralized infrastructure failed, the mesh would continue to work. The mesh would also provide a basic level of service, sufficient for uses such as email, with low maintenance and support requirements.
Advantages of Internet Infrastructure as a Public Good
- Increase in speed
- Increased reliability
- Universally provide a basic level of service
- Incrementally deployable
A mesh supports incremental roll-out. It could start in a single neighbourhood, using the wireless hardware of the neighbours to create a small network. As the mesh increases in size it can be self-organizing, with the nodes composing it being elected to more prominent roles if they have sufficient speed. The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre. The density of connection points has been studied, and there is a relationship to the potential speeds that are sustainable, again allowing incremental deployment, but in the dimension of speed.
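The self-organizing election of faster nodes described above can be sketched with a simple rule: each node advertises its link speed, and the fastest fraction are promoted to backbone roles. This is a toy model (the node ids, speeds, and promotion fraction are all made up); real mesh protocols weigh connectivity and stability as well as raw speed.

```python
def elect_backbone(nodes, fraction=0.2):
    """Promote the fastest fraction of mesh nodes to backbone roles.

    `nodes` maps a node id to its advertised link speed (Mbit/s);
    the top `fraction` of nodes (at least one) form the backbone.
    """
    ranked = sorted(nodes, key=nodes.get, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return set(ranked[:k])

# A hypothetical neighbourhood: node ids mapped to advertised speeds.
neighbourhood = {"a": 54, "b": 11, "c": 300, "d": 54, "e": 6}
backbone = elect_backbone(neighbourhood)  # the fastest node(s) form the backbone
```

Re-running the election as nodes join or leave is what makes the mesh self-organizing: no central operator assigns roles.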
Disadvantages of Internet Infrastructure as a Public Good
Notes and further reading:
- Why mesh: self-organizing, requires low levels of support, and has been used to reach rural areas; deployment can be done incrementally (some towns in the US have been doing this); it politically follows the current boundaries of municipal, provincial, and federal government.
- The benefits: a basic level of service, additional speed, and additional robustness.
- "Performance of Urban Mesh Networks": an excellent early paper analyzing the necessary density of mesh networks in an urban environment.
- "Building Rural Wireless Networks: Lessons Learnt and Future Directions": a paper that examines the concept of infrastructure as a public good in rural areas.
- "DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks": a newer paper addressing the scalability of ad hoc mesh networks, with promising results.
- BGP could become more important when replacing the current physical internet, though some routing federation would be needed. http://en.wikipedia.org/wiki/Border_Gateway_Protocol
- Virtual structure: Freenet. http://en.wikipedia.org/wiki/Freenet
Web Caching
Introduction
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of many websites that do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching <ref name="visolve">Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. link</ref>. Web caches can either exist on the end user's machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server <ref name="webcaching.com"> Web Caching Overview. visited March 2011. link </ref>. Internet Service Providers have a key interest in web caching and in most cases implement their own caches <ref name="visolve"/><ref name="cisco">Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. link</ref>. There are a variety of incentives for entities on the internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:
- Reduced Bandwidth Usage
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage <ref name="visolve"/><ref name="webcaching.com"/><ref name="cisco"/><ref name="survey">Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 link</ref><ref name="docforge"> Web application/Caching. visited March 2011. last modified September 2010. link</ref>. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%<ref name="cisco"/>. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial<ref name="cisco"/>.
- Improved End User Experience
Another benefit of web caching is the apparent reduction in latency for the end user <ref name="visolve"/><ref name="webcaching.com"/><ref name="survey"/><ref name="docforge"/>. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance that the data has to travel is cut down significantly (as web caches are intended to be relatively close to the end user), the time to deliver the content to the end user is also reduced significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience<ref name="docforge"/>.
- Reduced Web Server Load
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server <ref name="cisco"/><ref name="survey"/>. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs<ref name="docforge"/>.
Additional advantages include the added robustness that a web cache adds to the internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze internet usage patterns <ref name="survey"/>.
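The basic cache-then-forward behaviour described in this section can be sketched in a few lines of Python; `fetch_from_origin` is a stand-in for a real HTTP request, and all names here are illustrative.

```python
class WebCache:
    """Minimal proxy cache: serve a stored copy when one exists,
    otherwise fetch from the origin server and keep a copy."""

    def __init__(self, fetch_from_origin):
        self.fetch = fetch_from_origin    # callable: url -> response body
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:             # cache hit: origin never contacted
            self.hits += 1
            return self.store[url]
        self.misses += 1                  # cache miss: forward to the origin
        body = self.fetch(url)
        self.store[url] = body            # keep a copy for later requests
        return body

# Stand-in origin that records how often it is actually contacted.
origin_calls = []
cache = WebCache(lambda url: origin_calls.append(url) or f"<page {url}>")
cache.get("http://example.com/logo.png")
cache.get("http://example.com/logo.png")  # second request served from the cache
```

After the two requests, the origin has been contacted only once; every further request for the same object costs no upstream bandwidth, which is exactly the ISP incentive described above. (Real caches also honour expiry and validation headers, omitted here.)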
Web Caching Schemes
Since web caching has been identified as a significant asset to the internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes <ref name="survey"/> identified the main architectures that a large-scale web cache can have.
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client's machine, followed by a local, then a regional, and finally a national-level cache. In this type of system, web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand.
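A minimal sketch of this upward lookup and downward copy-back, with hypothetical level names, might look like this:

```python
class CacheLevel:
    """One level in a hierarchical web cache (e.g. local -> regional -> national).

    A miss is escalated to the parent level; on the way back down,
    each level keeps a copy, so popular data migrates toward the demand."""

    def __init__(self, name, parent=None, origin_fetch=None):
        self.name = name
        self.parent = parent              # next level up, None at the top
        self.origin_fetch = origin_fetch  # top level falls back to the origin
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name      # satisfied at this level
        if self.parent is not None:
            body, source = self.parent.get(url)    # escalate the miss
        else:
            body, source = self.origin_fetch(url), "origin"
        self.store[url] = body                     # copy left on the way down
        return body, source

# A hypothetical three-level hierarchy with a stand-in origin server.
national = CacheLevel("national", origin_fetch=lambda url: f"<data for {url}>")
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)
```

The first request for an object climbs all the way to the origin; the second is answered locally, and neighbouring clients of the same regional cache benefit too.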
Another potential architecture is distributed web caching. In such a structure there is only one level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing as well as introduces fault tolerance that was not available to strictly hierarchical structures. Examples of such systems <ref name="distributed1">Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 link</ref><ref name="distributed2"> Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS '99) link 1, link 2</ref> have been implemented and shown to be effective in realistic web traffic scenarios.
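The cooperating caches described above can be sketched with each cache keeping a digest of its peers' contents. This is a simplification of the summary/digest mechanisms real distributed caches use, and the class and peer names are illustrative.

```python
class PeerCache:
    """One cache in a single-level distributed scheme.  Each cache keeps a
    digest recording which peer holds which URL, and redirects misses there."""

    def __init__(self, name):
        self.name = name
        self.store = {}       # URLs this cache holds itself
        self.peers = []       # cooperating caches on the same level
        self.digest = {}      # url -> peer cache known to hold it

    def put(self, url, body):
        self.store[url] = body
        for peer in self.peers:           # advertise the new entry to peers
            peer.digest[url] = self

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        holder = self.digest.get(url)
        if holder is not None:            # a peer holds it: fetch from the peer
            return holder.store[url], holder.name
        return None, None                 # full miss: would go to the origin

# Two hypothetical cooperating caches.
a, b = PeerCache("A"), PeerCache("B")
a.peers, b.peers = [b], [a]
a.put("http://example.com/x", "<x>")
body, source = b.get("http://example.com/x")  # B serves the object via peer A
```

Because any peer can answer for any other, load spreads across the level and the loss of one cache only loses that cache's share of the data, which is the fault-tolerance advantage noted above.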
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists; however, there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol <ref name="icp">D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.</ref> can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level <ref name="survey"/>.
Web Caching as a Public Good
Web caching is obviously of enormous importance to the efficient functioning of the internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, and they do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers' satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn't mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously only available to customers of other ISPs.
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.
Once web caching becomes a public good, it would also be in the end users' best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user's machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects<ref name="boinc">David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. link</ref>. The end users' machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users' machines could be active participants in the caching, receiving their user's requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase <ref name="p2p">Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC '05). USENIX Association, Berkeley, CA, USA, 6-6. link</ref>.
Another option for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users' computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at large scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.
Extending Web Caching to Full Web Application Caching
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to "live" closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would develop their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users' data and the code to run their applications. This means that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without needing the enormous financial or physical resources available to modern-day corporations.
Another added benefit of this new definition of web caching is that it would allow individual fragments of the internet that, for one reason or another, become disconnected from the internet as a whole to still communicate through the cached web applications and data stored in their web caches. This means that a region undergoing a major natural catastrophe such as an earthquake, or even a section of the internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.
Advantages of Having Web Caching as a Public Good
The following list is a summary of the major advantages of having web caching as a public good.
- Further reduction of wasted bandwidth.
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests being sent out from the caches to the originating web servers will go down. Currently web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that might be able to be satisfied by caches implemented by another local ISP must be retrieved all the way from the originating web server. With the proposed architecture, these caches could then work together, essentially multiplying the available cache size. This would result in these types of web requests being satisfied locally and reducing the amount of long distance web requests significantly.
- Further reduction of latency and improved end user experience.
As noted above, with the massive increase in distributed local caching, the chances that users' web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user's immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server.
- Added Reliability/Robustness
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn't present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would provide fault tolerance, and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the internet's most popular web sites and applications.
- Inherently guaranteed basic level of service.
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently 'living' on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.
- Control in the hands of the users.
As with any entity that is put into the public hands, private interests in how web caches are controlled would now be a secondary concern to those of the public. This means that any innovation in web caching along with new technologies to improve how web caching is done can be implemented if it is in the best interests of the public. Currently we must rely on these upgrades being a worthy investment for a given ISP, regardless of how much upgrades would improve overall performance.
- Incrementally deployable.
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. It is imagined that a scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these become popular, more could start up in other urban areas, which could then be joined together by regional and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions join, the overall system only gets better.
Disadvantages of Having Web Caching as a Public Good
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below.
- Infrastructure costs.
First and foremost, putting the proposed web caching infrastructure into place, even if it incorporated the ISP caches already deployed, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as the caches at the higher levels of the hierarchy (provincial, national, etc.), would likely need a sizable investment to produce the envisioned system.
- Support costs.
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work either to convert the old ISP caches or to set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.
- Personal costs.
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would have to either contribute CPU cycles and storage space (which is itself a cost) or purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.
DNS (Fahim)
Introduction
DNS (the Domain Name System) is considered by many to be the "switchboard" of the internet. To make the internet that much more user friendly, a user or application need only supply a host name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.
Given its necessity, the system is a good candidate to be considered a public good. For most users, this service currently falls under the responsibility of their Internet Service Provider (ISP), which runs the resolvers that map names to IP addresses on their behalf.
Implementation Overview
For the sake of simplicity, it will be assumed that the service works like a giant dynamic database, where a request for a host name resolves to the returned value of an IP address.
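The "giant dynamic database" view can be made concrete with a minimal sketch. The `SimpleResolver` class and the names and addresses below are illustrative assumptions, not real DNS records or real DNS software; the sketch only shows the lookup contract the rest of this section relies on, including the unresolved-name case that ISPs sometimes intercept.

```python
class SimpleResolver:
    """Toy view of DNS as a dynamic name -> IP address database.
    All names and addresses used here are illustrative only."""

    def __init__(self):
        self.records = {}

    def register(self, name, ip):
        # DNS names are case-insensitive, so normalize on the way in.
        self.records[name.lower()] = ip

    def resolve(self, name):
        # Return the IP for a known name, or None for an unknown one
        # (the case an ISP might replace with an advertising redirect).
        return self.records.get(name.lower())
```

A real resolver is of course a distributed, hierarchical, cached system rather than one table, but the interface seen by a user or application is essentially this: name in, address (or failure) out.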
For a standard user, the ISP takes care of the DNS service, with the understanding that all internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada's biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent name. This can be seen as helpful (in the event of typos) or as a hindrance (suggestions driven by advertising).
More knowledgeable users can configure their DNS requests to be processed by any number of alternatives, such as Google's public DNS project or OpenDNS. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in "good samaritans" in a public community.
Issues for further Research and Development
- With free, public DNS, where is this information about user behaviour going, if anywhere? Is this an example of a good that should be managed by a central/public/democratized authority?
- The Design and Implementation of a Next Generation Name Service for the Internet proposes a peer-to-peer replacement for the DNS and is worth examining in this context. -Andrew
General Public Goods and the Internet
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.
- Essential Component of the Internet
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, so that aspects of the Internet cycle through the public's hands very rapidly, which would be very expensive. In general, novel aspects of the internet should be left in private hands; only after these aspects have proven themselves to be vital should they be considered as potential public goods.
- Adds Robustness and Reliability
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public's hands can improve this, it will improve the overall effectiveness of the Internet.
- Ensure a Basic Level of Service
Since public goods are defined as something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users within its reach, then by definition it should not be considered a public good.
- Improve Performance
Performance is always a key metric when discussing any distributed system, and it is a key concern here as well. Transitioning an aspect of the Internet into a public good should generally improve its performance, not merely preserve it.
- Makes the User Experience a Priority
With all things considered, the end user's experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user's experience. The parties that control these resources have their own priorities and sometimes these are at odds with what would be best for the user.
- Incrementally Deployable
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.
From Presentation
Transitioning a current aspect of the Internet into a public good should:
- add robustness and reliability
- ensure a basic level of service
- generally improve performance
- make the user experience a priority over private interests
- be incrementally deployable
Generally, potential disadvantages include:
- added infrastructure and support costs
- added complexity for application developers
Conclusion
From Presentation
- Since the Internet is becoming a ubiquitous entity, access to it is now essential
- Due to the nature of the Internet, a total conversion to a public good is neither possible nor desirable
- We have identified three aspects of the Internet as ideal candidates to become public goods
- These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good
References
<references/>
Miscellaneous
Members
- Lester Mundt - lmundt at connect.carleton.ca
- Fahim Rahman - frahman at connect.carleton.ca
- Andrew Schoenrock - aschoenr at scs.carleton.ca