<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Aschoenr</id>
	<title>Soma-notes - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://homeostasis.scs.carleton.ca/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Aschoenr"/>
	<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php/Special:Contributions/Aschoenr"/>
	<updated>2026-04-24T09:47:30Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.1</generator>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=18537</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=18537"/>
		<updated>2014-01-30T15:24:27Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the week&#039;s discussion: to try to understand what the early visionaries and researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where those ideas stand today. Notably, features that were easy to implement with simple mechanisms were carried forward, whereas those that demanded more complex systems, or that appeared to add little value in the near future, were deprioritized. In this context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, the Web being a good example. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto. One good example is that security was never considered essential in those systems, since they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug Engelbart died July 2, 2013&lt;br /&gt;
* Doug himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto was still less than that of commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary; Xerox PARC had raided Engelbart&#039;s team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few things layered on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to do so explicitly. Resources that it fundamentally made sense to share were shared: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
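The metalanguage idea above, that you first teach the computer the language you will work in, can be sketched in a few lines. The grammar table and parser below are a hypothetical toy, far simpler than the compiler-compilers of the era (such as Tree Meta), but they show the shape: the language is data, and one generic routine interprets any grammar you feed it.

```python
import re

# Toy metalanguage: the grammar is data; one generic routine parses any
# grammar you feed it. Hypothetical illustration only, far simpler than
# the era's real compiler-compilers.
GRAMMAR = {  # rule name -> alternatives; UPPERCASE symbols are tokens
    "expr": [["term", "PLUS", "expr"], ["term"]],
    "term": [["NUM"]],
}
TOKENS = {"PLUS": r"\+", "NUM": r"\d+"}

def parse(rule, toks, i=0):
    """Generic recursive-descent parser driven entirely by the tables."""
    for alt in GRAMMAR[rule]:
        j, kids, ok = i, [], True
        for sym in alt:
            if sym in GRAMMAR:
                sub = parse(sym, toks, j)
                if sub is None:
                    ok = False
                    break
                kids.append(sub[0])
                j = sub[1]
            elif j != len(toks) and re.fullmatch(TOKENS[sym], toks[j]):
                kids.append(toks[j])
                j += 1
            else:
                ok = False
                break
        if ok:
            return (rule, kids), j
    return None

tree, end = parse("expr", ["1", "+", "2", "+", "3"])
assert end == 5 and tree[0] == "expr"
```

Defining a new language is then a matter of editing the tables, not writing a new parser, which is the economy the compiler-compiler approach was after.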
&lt;br /&gt;
The “Mother of All Demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
*What is more interesting in this work is that:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers. This idea is embodied in NLS, the “oN-Line System”.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS, and more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=18536</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=18536"/>
		<updated>2014-01-30T15:23:31Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;the point form notes for this lecture could be turned into prose or at least merged into one set of notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the week&#039;s discussion: to try to understand what the early visionaries and researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where those ideas stand today. Notably, features that were easy to implement with simple mechanisms were carried forward, whereas those that demanded more complex systems, or that appeared to add little value in the near future, were deprioritized. In this context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, the Web being a good example. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto. One good example is that security was never considered essential in those systems, since they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug Engelbart died July 2, 2013&lt;br /&gt;
* Doug himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto was still less than that of commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary; Xerox PARC had raided Engelbart&#039;s team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few things layered on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to do so explicitly. Resources that it fundamentally made sense to share were shared: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
The “Mother of All Demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
*What is more interesting in this work is that:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers. This idea is embodied in NLS, the “oN-Line System”.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS, and more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=18535</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=18535"/>
		<updated>2014-01-30T15:22:50Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;the point form notes for this lecture could be turned into prose or at least merged into one set of notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the week&#039;s discussion: to try to understand what the early visionaries and researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where those ideas stand today. Notably, features that were easy to implement with simple mechanisms were carried forward, whereas those that demanded more complex systems, or that appeared to add little value in the near future, were deprioritized. In this context, the following observations were made: (1) a truly distributed computational infrastructure only makes sense when we have something to distribute; (2) use cases drive large distributed systems, the Web being a good example. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto. One good example is that security was never considered essential in those systems, since they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug Engelbart died July 2, 2013&lt;br /&gt;
* Doug himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto was still less than that of commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary; Xerox PARC had raided Engelbart&#039;s team. They almost had a GUI; rather, they had what we would today call a virtual console, with a few things layered on top.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to do so explicitly. Resources that it fundamentally made sense to share were shared: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
The “Mother of All Demos” is the nickname for Engelbart&#039;s 1968 demonstration, which showed how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
*What is more interesting in this work is that:&lt;br /&gt;
&amp;quot;His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers. This idea is embodied in NLS, the “oN-Line System”.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
1) NLS was a revolutionary computer collaboration system from the 1960s. &lt;br /&gt;
2) Designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). &lt;br /&gt;
3) The NLS system was the first to make practical use of:&lt;br /&gt;
  a) hypertext links,&lt;br /&gt;
  b) the mouse, &lt;br /&gt;
  c) raster-scan video monitors, &lt;br /&gt;
  d) information organized by relevance, &lt;br /&gt;
  e) screen windowing, &lt;br /&gt;
  f) presentation programs, &lt;br /&gt;
  g) and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS, and more designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18534</id>
		<title>DistOS 2014W Lecture 7</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_7&amp;diff=18534"/>
		<updated>2014-01-30T15:20:19Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project ==&lt;br /&gt;
&lt;br /&gt;
We discussed moving the proposal due date back a week. We also discussed spending the class prior to that date discussing the primary papers people had chosen in order to provide preliminary feedback. Anil spent some time going through the papers from OSDI12 and discussing which ones would make good projects and why.&lt;br /&gt;
&lt;br /&gt;
* Pick a primary paper.&lt;br /&gt;
* Find papers that cite that paper, papers it cites, etc. to collect a body of related work.&lt;br /&gt;
* Don&#039;t just give a history, tell a story!&lt;br /&gt;
* Do not try to summarize papers.&lt;br /&gt;
* Try to identify a pattern, a common ground between the papers.&lt;br /&gt;
&lt;br /&gt;
== Unix and Plan 9 ==&lt;br /&gt;
&lt;br /&gt;
UNIX was built as &amp;quot;a castrated version of Multics&amp;quot;, which was a very complex system. Multics was, arguably, so far ahead of its time that we are only now achieving its ambitions. Unix was much more modest, and therefore much more achievable and successful: just enough infrastructure to avoid reinventing the wheel, just a couple of programmers making something for their own use. Unix was not designed as a product or commercial entity at all. It was licensed out because AT&amp;amp;T was under severe antitrust scrutiny at the time.&lt;br /&gt;
&lt;br /&gt;
They wanted few, simple abstractions, so they made everything a file. Berkeley promptly broke this abstraction by introducing sockets for networking; Sun Microsystems later licensed Berkeley Unix and commercialized it. Plan 9 finally introduced networking using the right abstractions, but it was too late. Arguably, the BSD folks didn&#039;t use the file abstraction because of the difference in reliability. Files are generally reliable, and failures with them are catastrophic, so many applications simply didn&#039;t include logic to handle such I/O errors. Networks are much less reliable, and applications have to deal gracefully with timeouts and other errors.&lt;br /&gt;
&lt;br /&gt;
In Anil&#039;s opinion, Plan 9&#039;s use of the file abstraction to represent the network wasn&#039;t a good design idea. The reason is that file I/O rarely breaks, but the network is inherently flaky: loss of connectivity is normal. Using file-system abstractions for the network doesn&#039;t properly account for that flakiness. Put another way, the network doesn&#039;t have the reliability characteristics of mass storage, and how to deal with that fact while still using the file abstraction was a major question the Plan 9 designers left unanswered. Anil added that Plan 9 was an elegant attempt at representing everything with the file abstraction, but, as noted above, they were trying too hard. In distributed systems, the best approach is this: if things have different semantics, they should have abstractions that reflect their characteristics. APIs should expose those characteristics rather than hide them and pretend things behave like something they are not, in an attempt at too much generalization. In Anil&#039;s opinion, another reason Plan 9 was not widely adopted was that it was a bit late to the scene: by the time it came out in the 90s, systems running UNIX with networking were already widely deployed, driven by the success of the Internet.&lt;br /&gt;
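The point about APIs reflecting failure characteristics can be made concrete. The sketch below is an illustration in Python, not anything from Plan 9 or the lecture: a network read gets wrapped in timeout-and-retry logic of a kind that ordinary file reads almost never need.

```python
import socket

def read_with_retry(sock, n, attempts=3):
    """Retry a network read on timeout; ordinary file reads rarely need this."""
    for attempt in range(attempts):
        try:
            return sock.recv(n)          # may time out: networks are flaky
        except socket.timeout:
            if attempt == attempts - 1:
                raise                    # surface the flakiness to the caller

# Demo with a local socket pair standing in for a real network peer.
a, b = socket.socketpair()
b.settimeout(0.05)
a.sendall(b"ok")
assert read_with_retry(b, 2) == b"ok"    # data available: reads like a file
try:
    read_with_retry(b, 2)                # nothing more coming: every try times out
    timed_out = False
except socket.timeout:
    timed_out = True
assert timed_out
a.close()
b.close()
```

The retry loop and the timeout exception are exactly the semantics a file-shaped API hides, which is the objection to stretching the file abstraction over the network.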
&lt;br /&gt;
Another valuable point Anil made was that for a technology to be adopted and become successful, it should address a niche for which there are no successful incumbents.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Simon&#039;s Notes ==  &lt;br /&gt;
 &#039;&#039;&#039;These notes should be merged with the text above&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
* project proposal&lt;br /&gt;
** We will discuss the primary papers we&#039;ve chosen on Thursday, February 6th&lt;br /&gt;
* possible papers, remember to pick a topic you have some chance of understanding&lt;br /&gt;
** OSDI 2012 &lt;br /&gt;
*** datacenter (filesystems for doing X, heat management, etc...)&lt;br /&gt;
*** web stuff&lt;br /&gt;
*** distributed shared memory&lt;br /&gt;
*** distributed network I/O infrastructure&lt;br /&gt;
*** distributed databases (potentially)&lt;br /&gt;
*** anonymity systems&lt;br /&gt;
** Pick a conference (usenix is pretty systems oriented, maybe Lisa), go through their papers and find something interesting&lt;br /&gt;
** tell a story that connects several papers in the topic you choose&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* UNIX&lt;br /&gt;
** Relation to multics&lt;br /&gt;
*** Multics was a complex system, which was bad: it was slower, less used, etc.&lt;br /&gt;
*** Multics was not for end users, it was designed to support &amp;quot;utility computing&amp;quot; wherein computation was a service to be charged for&lt;br /&gt;
** What?&lt;br /&gt;
*** Just enough infrastructure to run my programs&lt;br /&gt;
*** It was really just supposed to be used by programmers&lt;br /&gt;
*** &amp;quot;By programmers for programmers&amp;quot;&lt;br /&gt;
*** Software and source licensed for a nominal fee&lt;br /&gt;
*** &amp;quot;Everything is a file&amp;quot;&lt;br /&gt;
*** The only difference was between files you could seek on and ones you couldn&#039;t&lt;br /&gt;
*** simple abstractions&lt;br /&gt;
** Networking&lt;br /&gt;
*** Berkeley folks made sockets, not files, which upset the folks at Bell Labs&lt;br /&gt;
*** Networks aren&#039;t exactly like files because they&#039;re unreliable&lt;br /&gt;
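The seek distinction above is easy to demonstrate. In the Python sketch below (assuming a Unix-like system), a socket behaves like a file descriptor for reading, but the stream semantics leak out the moment you try to seek:

```python
import errno, os, socket, tempfile

# A regular file descriptor supports random access.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.lseek(fd, 0, os.SEEK_SET)             # rewind: fine on a real file
assert os.read(fd, 5) == b"hello"
os.close(fd)
os.unlink(path)

# A socket is also just a file descriptor, but it is a stream: no rewinding.
a, b = socket.socketpair()
a.sendall(b"hello")
assert b.recv(5) == b"hello"             # reading works exactly like a file
try:
    os.lseek(b.fileno(), 0, os.SEEK_SET)
    seekable = True
except OSError as e:
    seekable = False
    assert e.errno == errno.ESPIPE       # "illegal seek" on a stream
assert not seekable
a.close()
b.close()
```

The "everything is a file" story holds right up to the operation that assumes mass storage underneath.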
&lt;br /&gt;
&lt;br /&gt;
* Plan 9&lt;br /&gt;
** major ideas&lt;br /&gt;
*** procfs, later adopted by Linux&lt;br /&gt;
** summary&lt;br /&gt;
*** a very elegant attempt to follow the philosophy &amp;quot;everything is a file&amp;quot;&lt;br /&gt;
*** trying too hard&lt;br /&gt;
** opinions&lt;br /&gt;
*** things that have different failure modes deserve different APIs&lt;br /&gt;
** niche?&lt;br /&gt;
*** they never found one&lt;br /&gt;
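The procfs idea, process state exposed through ordinary file I/O, can be seen directly on a Linux system today (the /proc paths below are Linux-specific):

```python
import os

# procfs: the kernel publishes per-process state as ordinary readable files.
# Linux-specific layout; other systems arrange /proc differently or lack it.
pid = os.getpid()
with open(f"/proc/{pid}/status") as f:
    status = dict(line.split(":", 1) for line in f if ":" in line)

assert int(status["Pid"]) == pid         # the filesystem view matches our pid
print(status["Name"].strip())            # this process's command name
```

No system call beyond open/read is needed, which is exactly the Plan 9 aesthetic that Linux borrowed.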
&lt;br /&gt;
&lt;br /&gt;
* Tangent about programming languages&lt;br /&gt;
** C was for system programming&lt;br /&gt;
** Java was for enterprise programming&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=18510</id>
		<title>DistOS 2014W Lecture 6</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_6&amp;diff=18510"/>
		<updated>2014-01-28T05:47:49Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;the point form notes for this lecture could be turned into full sentences/paragraphs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Group Discussion on &amp;quot;The Early Web&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
Questions to discuss:&lt;br /&gt;
&lt;br /&gt;
# How do you think the web might have turned out differently from the way it is today? &lt;br /&gt;
# What kind of infrastructure changes would you like to make? &lt;br /&gt;
&lt;br /&gt;
=== Group 1 ===&lt;br /&gt;
: The group was relatively satisfied with the present structure of the web; the changes they suggested fall into the following areas: &lt;br /&gt;
* Make fuller use of the potential of existing protocols &lt;br /&gt;
* More communication and interaction capabilities.&lt;br /&gt;
* Changes to the present payment systems, for example &amp;quot;micro-payments&amp;quot; (a discussion we will return to in future classes) and cryptographic currencies.&lt;br /&gt;
* Augmented reality.&lt;br /&gt;
* More towards individual privacy. &lt;br /&gt;
&lt;br /&gt;
=== Group 2 ===&lt;br /&gt;
==== Problem of unstructured information ====&lt;br /&gt;
A large portion of the web serves content that is overwhelmingly concerned with presentation rather than with structure. Tim Berners-Lee himself bemoaned the death of the semantic web. His original vision was as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Code from Wikipedia&#039;s article on the semantic web, except for the block quoting form, which this MediaWiki instance doesn&#039;t seem to support. --&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web&amp;amp;nbsp;– the content, links, and transactions between people and computers. A &amp;quot;Semantic Web&amp;quot;, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The &amp;quot;intelligent agents&amp;quot; people have touted for ages will finally materialize.&amp;lt;ref&amp;gt;{{cite book |last=Berners-Lee |first=Tim |authorlink=Tim Berners-Lee |coauthors=Fischetti, Mark |title=Weaving the Web |publisher=HarperSanFrancisco |year=1999 |pages=chapter 12 |isbn=978-0-06-251587-2 |nopp=true }}&amp;lt;/ref&amp;gt;&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For this vision to be true, information arguably needs to be structured, maybe even classified. The idea of a universal information classification system has been floated. The modern web is mostly developed by software developers and similar, not librarians and the like.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- TODO: Yahoo blurb. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also, how does one differentiate satire from fact?&lt;br /&gt;
&lt;br /&gt;
==== Valuation and deduplication of information ====&lt;br /&gt;
Another common problem with the current web is the duplication of information. Redundancy that increases the availability of information is not in itself harmful, but is ad-hoc duplication of the information itself?&lt;br /&gt;
&lt;br /&gt;
One then comes to the problem of assigning a value to the information found therein. How does one rate information, and according to what criteria? How does one authenticate the information? Often, popularity is used as an indicator of veracity, almost in a sophistic manner. See the excessive reliance on Google page ranking for research, or on Reddit scores for news consumption.&lt;br /&gt;
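As a side note on what page ranking actually measures, here is a toy power-iteration sketch of PageRank over a made-up four-page link graph; the score reflects link structure (popularity), not veracity.&lt;br /&gt;

```python
# Hypothetical link graph: page -> pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
d = 0.85                            # standard damping factor
for _ in range(50):                 # power iteration until roughly stable
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)
    rank = new
```

Page &amp;quot;c&amp;quot; ends up ranked highest simply because three pages link to it; nothing in the computation asks whether its content is true.&lt;br /&gt;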
&lt;br /&gt;
=== On the current infrastructure ===&lt;br /&gt;
The current &amp;lt;em&amp;gt;internet&amp;lt;/em&amp;gt; infrastructure should remain as is, at least in countries with more than a modicum of freedom of access to information. Centralization of control of access to information is a terrible power. See China and parts of the Middle-East. On that note, what can be said of popular sites, such as Google or Wikipedia, that serve as the main entry point for many access patterns?&lt;br /&gt;
&lt;br /&gt;
The problem, if any, in the current web infrastructure is of the web itself, not the internet.&lt;br /&gt;
&lt;br /&gt;
=== Group 3 ===&lt;br /&gt;
* What we want to keep &lt;br /&gt;
** Linking mechanisms&lt;br /&gt;
** Minimum permissions to publish&lt;br /&gt;
* What we don&#039;t like&lt;br /&gt;
** Relying on one source for document &lt;br /&gt;
** Privacy links for security&lt;br /&gt;
* Proposal &lt;br /&gt;
** Peer-to-peer, distributed mechanisms for hosting documents&lt;br /&gt;
** Reverse links with caching - distributed cache&lt;br /&gt;
** More availability for user - what happens when system fails? &lt;br /&gt;
** Key management to be considered - Is it good to have centralized or distributed mechanism? &lt;br /&gt;
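The reverse-link idea in the proposal above amounts to inverting the forward-link map, so any page can discover who links to it; the page names here are hypothetical.&lt;br /&gt;

```python
# Forward links as published: page -> pages it links to.
forward = {
    "home": ["about", "news"],
    "news": ["about"],
    "blog": ["home", "news"],
}

# Invert the map to get backlinks: page -> pages that link to it.
backlinks = {}
for src, dsts in forward.items():
    for dst in dsts:
        backlinks.setdefault(dst, []).append(src)
```

A distributed cache of such backlink records is what the group sketches; the inversion itself is the easy part, while keeping it consistent across hosts is not.&lt;br /&gt;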
&lt;br /&gt;
=== Group 4 ===&lt;br /&gt;
* The idea of the web searching for us &lt;br /&gt;
* A suggestion of how different the web might have been had it been implemented by &amp;quot;AI&amp;quot; people&lt;br /&gt;
** AI programs searching for data - a notion Google is already slowly implementing.&lt;br /&gt;
* Generate report forums&lt;br /&gt;
* An HTML equivalent inspired by AI communication&lt;br /&gt;
* Higher semantics apart from just indexing the data&lt;br /&gt;
** Problem : &amp;quot;How to bridge the semantic gap?&amp;quot;&lt;br /&gt;
** Search for more data patterns&lt;br /&gt;
&lt;br /&gt;
== Group design exercise — The web that could be ==&lt;br /&gt;
&lt;br /&gt;
* “The web that wasn&#039;t” mentioned the moans of librarians.&lt;br /&gt;
* A universal classification system is needed.&lt;br /&gt;
* The training overhead of classifiers (e.g., librarians) is high. See the master&#039;s that a librarian would need.&lt;br /&gt;
* More structured content, in both classification and organization&lt;br /&gt;
* Current indexing works by crude brute-force searching for words, etc., rather than by searching metadata&lt;br /&gt;
* Information doesn&#039;t have the same persistence, see bitrot and Vint Cerf&#039;s talk.&lt;br /&gt;
* Too concerned with presentation now.&lt;br /&gt;
* Tim Berners-Lee bemoaning the death of the semantic web.&lt;br /&gt;
* The problem of information duplication when information gets redistributed across the web. However, we do want redundancy.&lt;br /&gt;
* Too much developed by software developers&lt;br /&gt;
* Too reliant on Google for web structure&lt;br /&gt;
** See search-engine optimization&lt;br /&gt;
* Problem of authentication (of the information, not the presenter)&lt;br /&gt;
** Too dependent at times on the popularity of a site, almost in a sophistic manner.&lt;br /&gt;
** See Reddit&lt;br /&gt;
* How do you programmatically distinguish satire from fact&lt;br /&gt;
* The web&#039;s structure is also “shaped by inbound links but would be nice a bit more”&lt;br /&gt;
* Infrastructure doesn&#039;t need to change per se.&lt;br /&gt;
** The distributed architecture should still stay. Centralization of control of allowed information and access is a terrible power. See China and the Middle-East.&lt;br /&gt;
** Information, for the most part, in itself, exists centrally (as per-page), though communities (to use a generic term) are distributed.&lt;br /&gt;
* Need more sophisticated natural language processing.&lt;br /&gt;
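The brute-force-indexing point above can be contrasted with a toy inverted index (the documents are made up): the scan over the text happens once, up front, and queries then become simple lookups.&lt;br /&gt;

```python
# Toy corpus: document id -> text.
docs = {
    "d1": "the web that was",
    "d2": "the web that was not",
    "d3": "structured content on the web",
}

# Build the inverted index once: word -> set of documents containing it.
index = {}
for doc_id, text in docs.items():
    for word in set(text.split()):
        index.setdefault(word, set()).add(doc_id)

def search(word):
    """Look the word up in the index instead of scanning every document."""
    return sorted(index.get(word, set()))
```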
&lt;br /&gt;
== Class discussion ==&lt;br /&gt;
&lt;br /&gt;
Focusing on vision, not the mechanism.&lt;br /&gt;
&lt;br /&gt;
* Reverse linking&lt;br /&gt;
* Distributed content distribution (glorified cache)&lt;br /&gt;
** Both for privacy and redundancy reasons&lt;br /&gt;
** Suggested centralized content certification, but doesn&#039;t address the problem of root of trust and distributed consistency checking.&lt;br /&gt;
*** Distributed key management is a holy grail&lt;br /&gt;
*** What about detecting large-scale subversion attempts, like in China&lt;br /&gt;
* What is the new revenue model?&lt;br /&gt;
** What was TBL&#039;s revenue model (tongue-in-cheek, none)?&lt;br /&gt;
** Organisations like Google monetized the internet, and this mechanism could destroy their ability to do so.&lt;br /&gt;
* Search work is semi-distributed. Suggested letting the web do the work for you.&lt;br /&gt;
* Trying to structure content in a manner simultaneously palatable to both humans and machines.&lt;br /&gt;
* Using spare CPU time on servers for natural language processing (or other AI) of cached or locally available resources.&lt;br /&gt;
* Imagine a smushed Wolfram Alpha, Google, Wikipedia, and Watson, and then distributed over the net.&lt;br /&gt;
* The document was TBL&#039;s idea of the atom of content, whereas nowadays we really need something more granular.&lt;br /&gt;
* We want to extract higher-level semantics.&lt;br /&gt;
* Google may not be pure keyword search anymore. It is essentially AI now, but we still struggle with expressing what we want to Google.&lt;br /&gt;
* What about the adversarial aspect of content hosters, vying for attention?&lt;br /&gt;
* People do actively try to fool you.&lt;br /&gt;
* Compare to Google News, though that is very specific to that domain. Their vision is a semantic web, but they are incrementally building it.&lt;br /&gt;
* In a scary fashion, Google is one of the central points of failure of the web. Even scarier is less technically competent people who depend on Facebook for that.&lt;br /&gt;
* There is a semantic gap between how we express and query information, and how AI understands it.&lt;br /&gt;
* Can think of Facebook as a distributed human search infrastructure.&lt;br /&gt;
* A core service of an operating system is locating information. &#039;&#039;&#039;Search is infrastructure.&#039;&#039;&#039;&lt;br /&gt;
* The problem is not purely technical. There are political and social aspects.&lt;br /&gt;
** Searching for a file on a local filesystem should have an unambiguous answer.&lt;br /&gt;
** Asking the web is a different thing. “What is the best chocolate bar?”&lt;br /&gt;
* Is the web a network database (as understood in COMP 3005), which we consider harmful?&lt;br /&gt;
* For two-way links, there is the problem of restructuring data and all the dependencies.&lt;br /&gt;
* Privacy issues when tracing paths across the web.&lt;br /&gt;
* What about the problem of information revocation?&lt;br /&gt;
* Need more augmented reality and distributed and micro payment systems.&lt;br /&gt;
* We need distributed, mutually untrusting social networks.&lt;br /&gt;
** That creates problems of storage and computation, and also takes away some of the monetizable aspects.&lt;br /&gt;
* Distribution is not free. It is very expensive in very funny ways.&lt;br /&gt;
* The dream of harvesting all the computational power of the internet is not new.&lt;br /&gt;
** Startups have come and gone many times over that problem.&lt;br /&gt;
* Google&#039;s indexers understand many documents on the web quite well. However, Google only &#039;&#039;&#039;presents&#039;&#039;&#039; a primitive keyword-like interface; it doesn&#039;t expose the ontology.&lt;br /&gt;
* Organising information does not necessarily mean applying an ontology to it.&lt;br /&gt;
* The organisational methods we now use don&#039;t use ontologies, but rather are supplemented by them.&lt;br /&gt;
&lt;br /&gt;
A couple of related points Anil mentioned during the discussion:&lt;br /&gt;
*Distributed key management is a holy grail that no one has ever managed to get working.&lt;br /&gt;
*Databases have become important building blocks of distributed operating systems; Anil stressed that databases can in fact be considered an OS service these days.&lt;br /&gt;
*The question “How do you navigate the complex information space?” has remained a prominent one that the Web has always faced.&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=18509</id>
		<title>DistOS 2014W Lecture 5</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_5&amp;diff=18509"/>
		<updated>2014-01-28T05:47:14Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;the point form notes for this lecture could be turned into full sentences/paragraphs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
Anil set the theme of the discussion for the week: to try to understand what the early visionaries and researchers wanted the computer to be, and what it has become. In other words, what was considered fundamental in those days, and where does it stand today? It is worth noting that features that were easy to implement with simple mechanisms were carried forward, whereas those that demanded more complex systems, or that were found to add little value in the near term, were given lower priority. In the same context, the following observations were made: (1) a truly distributed computational infrastructure only really makes sense when we have something to distribute; (2) use cases drive large distributed systems, a good example being the Web. Another key observation from Anil was that there was always a utopian aspect to the early systems, be it NLS, ARPANET, or the Alto. One good example is that security was never considered essential in those systems; they were assumed to operate in a trusted environment. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
; Operating system&lt;br /&gt;
: The software that turns the computer you have into the one you want (Anil)&lt;br /&gt;
&lt;br /&gt;
* What sort of computer did we want to have?&lt;br /&gt;
* What sort of abstractions did they want to be easy? Hard?&lt;br /&gt;
* What could we build with the internet (not just WAN, but also LAN)?&lt;br /&gt;
* Most dreams people had of their computers smacked into the wall of reality.&lt;br /&gt;
&lt;br /&gt;
= MOAD review in groups =&lt;br /&gt;
&lt;br /&gt;
* Chorded keyboard unfortunately obscure, partly because the attendees disagreed with the long-term investment of training the user.&lt;br /&gt;
* View control → hyperlinking system, but in a lightweight (more like nanoweight) markup language.&lt;br /&gt;
* Ad-hoc ticketing system&lt;br /&gt;
* Ad-hoc messaging system&lt;br /&gt;
** Used on a time-sharing system with shared storage&lt;br /&gt;
* Primitive revision control system&lt;br /&gt;
* Different vocabulary:&lt;br /&gt;
** Bug and bug smear (mouse and trail)&lt;br /&gt;
** Point rather than click&lt;br /&gt;
&lt;br /&gt;
= Class review =&lt;br /&gt;
&lt;br /&gt;
* Doug died Jul 2 2013&lt;br /&gt;
* Doug himself called it an “online system”, rather than offline composition of code using card punchers as was common in the day.&lt;br /&gt;
* What became of the tech:&lt;br /&gt;
** Chorded keyboards:&lt;br /&gt;
*** Exist but obscure&lt;br /&gt;
** Pre-ARPANET network:&lt;br /&gt;
*** Time-sharing mainframe&lt;br /&gt;
*** 13 workstations&lt;br /&gt;
*** Telephone and television circuit&lt;br /&gt;
** Mouse&lt;br /&gt;
*** “I sometimes apologize for calling it a mouse”&lt;br /&gt;
** Collaborative document editing integrated with screen sharing&lt;br /&gt;
** Videoconferencing&lt;br /&gt;
*** Part of the vision, but more for the demo at the time,&lt;br /&gt;
** Hyperlinks&lt;br /&gt;
*** The web on a mainframe&lt;br /&gt;
** Languages&lt;br /&gt;
*** Metalanguages&lt;br /&gt;
**** “Part and parcel of their entire vision of augmenting human intelligence.”&lt;br /&gt;
**** You must teach the computer about the language you are using.&lt;br /&gt;
**** They were the use case. It was almost designed more for augmenting programmer intelligence rather than human intelligence.&lt;br /&gt;
*** It was normal for the time to build new languages (domain-specific) for new systems. Nowadays, we standardize on one but develop large APIs, at the expense of conciseness. We look for short-term benefits; we minimize programmer effort.&lt;br /&gt;
*** Compiler compiler&lt;br /&gt;
** Freeze-pane&lt;br /&gt;
** Folding—Zoomable UI (ZUI)&lt;br /&gt;
*** Lots of systems do it, but not the default&lt;br /&gt;
*** Much easier to just present everything.&lt;br /&gt;
** Technologies that required further investment got left behind.&lt;br /&gt;
* The NLS had little to no security&lt;br /&gt;
** There was a minimal notion of a user&lt;br /&gt;
** There was a utopian aspect. Meanwhile, the Mac had no utopian aspect. Data exchange was through floppies. Any network was small, local, ad-hoc, and among trusted peers.&lt;br /&gt;
** The system wasn&#039;t envisioned to scale up to masses of people who didn&#039;t trust each other.&lt;br /&gt;
** How do you enforce secrecy?&lt;br /&gt;
* Part of the reason for lack of adoption of some of the tech was hardware. We can posit that a bigger reason would be infrastructure.&lt;br /&gt;
* Differentiate usability of system from usability of vision&lt;br /&gt;
** What was missing was the polish, the ‘sexiness’, and the intuitiveness of later systems like the Apple II and the Lisa.&lt;br /&gt;
** The usability of the later Alto is still less than commercial systems.&lt;br /&gt;
*** The word processor was modal, which is apt to confuse unmotivated and untrained users.&lt;br /&gt;
* In the context of the Mother of All Demos, the Alto doesn&#039;t seem entirely revolutionary. Xerox PARC raided his team. They almost had a GUI; rather they had what we call today a virtual console, with a few things above.&lt;br /&gt;
* What happens with visionaries that present a big vision is that the spectators latch onto specific aspects.&lt;br /&gt;
* To be comfortable with not adopting the vision, one must ostracize the visionary. People pay attention to things that fit into their world view.&lt;br /&gt;
* Use cases of networking have changed little, though the means did&lt;br /&gt;
* Fundamentally a resource-sharing system; everything is shared, unlike later systems where you would need to share explicitly. The resources shared were those it fundamentally made sense to share: documents, printers, etc.&lt;br /&gt;
* Resource sharing was never enough. &#039;&#039;&#039;Information-sharing&#039;&#039;&#039; was the focus.&lt;br /&gt;
&lt;br /&gt;
“The Mother of All Demos” is the nickname for Engelbart&#039;s 1968 demonstration of how computers could help humans become smarter. &lt;br /&gt;
&lt;br /&gt;
*More interesting in this work:&lt;br /&gt;
His idea included seeing computing devices as a means to communicate and retrieve information, rather than just crunch numbers. This idea is represented in NLS, the “On-Line System”.&lt;br /&gt;
&lt;br /&gt;
*Some information about the NLS system:&lt;br /&gt;
# NLS was a revolutionary computer collaboration system from the 1960s.&lt;br /&gt;
# It was designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI).&lt;br /&gt;
# NLS was the first system to make practical use of:&lt;br /&gt;
#* hypertext links,&lt;br /&gt;
#* the mouse,&lt;br /&gt;
#* raster-scan video monitors,&lt;br /&gt;
#* information organized by relevance,&lt;br /&gt;
#* screen windowing,&lt;br /&gt;
#* presentation programs,&lt;br /&gt;
#* and other modern computing concepts.&lt;br /&gt;
&lt;br /&gt;
= Alto review =&lt;br /&gt;
&lt;br /&gt;
* Fundamentally a personal computer&lt;br /&gt;
* Applications:&lt;br /&gt;
** Drawing program with curves and arcs for drawing&lt;br /&gt;
** Hardware design tools (mostly logic boards)&lt;br /&gt;
** Time server&lt;br /&gt;
* Less designed for reading than the NLS. More designed around paper. Xerox had a laser printer, and you would read what you printed. Hypertext was deprioritized, whereas the NLS vision had focused on what could not be expressed on paper.&lt;br /&gt;
* Xerox had almost an obsession with making documents print beautifully.&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_2&amp;diff=18452</id>
		<title>DistOS 2014W Lecture 2</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_2&amp;diff=18452"/>
		<updated>2014-01-21T15:09:19Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; &#039;&#039;&#039;this section needs work&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
(Not sure who originally volunteered to add this lecture, but they haven&#039;t put it up so I&#039;m uploading my incomplete notes. Hopefully somebody will be able to fill it in with more detail.)&lt;br /&gt;
&lt;br /&gt;
We now have a working definition of a Distributed OS, so we look a little more closely at the underlying network. The internet (and thus the vast majority of distributed OS work today) runs over the [https://en.wikipedia.org/wiki/TCP_IP TCP and IP protocols].&lt;br /&gt;
&lt;br /&gt;
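As a reminder of what those protocols provide, here is a minimal TCP exchange over the loopback interface (standard library only): a reliable, ordered byte stream on top of IP&#039;s unreliable datagrams.&lt;br /&gt;

```python
import socket
import threading

# Server side: accept one connection and echo back whatever arrives.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

# Client side: connect, send a message, read the echo.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
```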
Anil observed that the Dist. OS abstractions which succeed are ones that don&#039;t hide the network. For example, the remote procedure call (RPC) style abstractions have generally failed because they try to hide the untrusted nature of the network. The result has been a hodge-podge of firewall software which is primarily for blocking RPC-based protocols like SMB, NFS, etc. REST, on the other hand, has succeeded on the open web because it doesn&#039;t &amp;quot;hide the network&amp;quot; in this way.&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_1&amp;diff=18383</id>
		<title>DistOS 2014W Lecture 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_1&amp;diff=18383"/>
		<updated>2014-01-14T15:10:35Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;What is an OS?&#039;&#039;&#039; Here are some ideas of what it could mean:&lt;br /&gt;
* a hardware abstraction&lt;br /&gt;
* Consistent execution environment. (ie. code written to interface -- think portable code)&lt;br /&gt;
* manages I/O&lt;br /&gt;
* Resource management/Multiplexing&lt;br /&gt;
* Communication infrastructure (example Inter Process Communication mechanisms) between the users (process, applications) of the Operating System.&lt;br /&gt;
&lt;br /&gt;
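The IPC bullet above can be made concrete with a small POSIX-only sketch: a parent and a forked child exchanging a message through a kernel-managed pipe.&lt;br /&gt;

```python
import os

# One pipe, two processes: the child writes, the parent reads.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                        # child process
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)                     # exit without running any more code
os.close(w)                         # parent process
msg = os.read(r, 1024)
os.close(r)
os.waitpid(pid, 0)
```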
An OS can be defined by the role it plays in the programming of systems. It takes care of resource management and creates abstractions. An OS turns hardware into the computer/API/interface you WANT to program.&lt;br /&gt;
&lt;br /&gt;
This is similar to how the browser is becoming the OS of the web. The browser is&lt;br /&gt;
the key abstraction needed to run web apps. It is the interface web developers target.&lt;br /&gt;
It doesn&#039;t matter what you consume a given website on (eg. a phone, tablet,&lt;br /&gt;
etc.), the browser abstracts the device&#039;s hardware and OS away.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;So, what&#039;s a distributed OS?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Anil prefers to think of this &#039;logically&#039; rather than functionally/physically. This is&lt;br /&gt;
because the old distributed operating system (DOS) model still applies to today&#039;s systems&lt;br /&gt;
(ie. managing multiple cores, etc.). The traditional definition is systems that&lt;br /&gt;
manage their resources over a network.&lt;br /&gt;
&lt;br /&gt;
A lot of these definitions are hard to peg down because simplicity always gets in&lt;br /&gt;
the way of truth. These concepts do not fit into well-defined classes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Anil&#039;s definition&#039;&#039;&#039;: &amp;quot;taking the distributed pieces of a system you have and&lt;br /&gt;
turning it into the system you WANT.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It is good to think about DOSs within the context of who or what is in&lt;br /&gt;
control, in terms of who makes and enforces decisions. The traditional kernel-process model is a dictatorship: an authoritarian&lt;br /&gt;
model of control in which the kernel decides what lives or dies. The internet, by&lt;br /&gt;
contrast, is decentralised (eg. DNS). Distributed systems may have distributed&lt;br /&gt;
policies where there is no single source of power. Even within the DOS paradigm we can see instances of authoritarian/centralized approaches, one example being the walled-garden model employed by Apple iOS. Anil&#039;s observation is that centralized systems have an inherent fragility built in: such systems come into existence and disappear after a while. Examples are AOL and Myspace, and even Facebook looks to be a possible candidate for a similar fate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;&#039;&#039;This section below needs to be integrated into the notes above. Anyone can feel free to do this&#039;&#039;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yuan Liu&#039;s Notes   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(Normal) Operating Systems&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
OS allows you to run on (slightly) different hardware. Functionalities and responsibilities of OSes include:&lt;br /&gt;
&lt;br /&gt;
* abstracts hardware such that hardware resources can be accessed by software&lt;br /&gt;
* provides consistent execution environment (which hardware doesn&#039;t provide)&lt;br /&gt;
* manages I/O (such as user I/O, machine I/O i.e. network I/O, sensors, videos, etc.)&lt;br /&gt;
* manages resources via multiplexing&lt;br /&gt;
* multiplexing (sharing): one resource wanted by multiple users&lt;br /&gt;
* O/S turns the computer you have into a computer you want to program&lt;br /&gt;
* manages synchronization and concurrency issues&lt;br /&gt;
* resource management and abstraction&lt;br /&gt;
* uses policies to manage resources&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Distributed O/S&#039;&#039;&#039;&lt;br /&gt;
* turns a distributed system (with their hardware) into a distributed system you want to program&lt;br /&gt;
* resource management: who is in charge?&lt;br /&gt;
* in local O/S, the kernel is the boss&lt;br /&gt;
* in distributed O/S, the control is decentralized&lt;br /&gt;
* different humans control their machine&lt;br /&gt;
* has distributed policies for managing resources&lt;br /&gt;
* who decides control? different than local O/S&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Other thoughts&#039;&#039;&#039;&lt;br /&gt;
* a more centralized system will become fragile later&lt;br /&gt;
* concentration of policy tends to fall apart in the future, according to Anil&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_1&amp;diff=18381</id>
		<title>DistOS 2014W Lecture 1</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS_2014W_Lecture_1&amp;diff=18381"/>
		<updated>2014-01-14T05:39:48Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;What is an OS?&#039;&#039;&#039; Here are some ideas of what it could mean:&lt;br /&gt;
* a hardware abstraction&lt;br /&gt;
* Consistent execution environment. (ie. code written to interface -- think portable code)&lt;br /&gt;
* manages I/O&lt;br /&gt;
* Resource management/Multiplexing&lt;br /&gt;
* Communication infrastructure (example Inter Process Communication mechanisms) between the users (process, applications) of the Operating System.&lt;br /&gt;
&lt;br /&gt;
An OS can be defined by the role it plays in the programming of systems. It takes care of resource management and creates abstraction. An OS turns hardware into the computer/api/interface you WANT to program.&lt;br /&gt;
&lt;br /&gt;
This is similar to how the browser is becoming the OS of the web. The browser is&lt;br /&gt;
the key abstraction needed to run web apps. It is the interface web developers target.&lt;br /&gt;
It doesn&#039;t matter what you consume a given website on (eg. a phone, tablet,&lt;br /&gt;
etc.), the browser abstracts the device&#039;s hardware and OS away.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;So, what&#039;s a distributed OS?&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Anil prefers to think of this &#039;logically&#039; than functionally/physically.  This is&lt;br /&gt;
because the old distributed operating system (DOS) model applies to today&#039;s systems&lt;br /&gt;
(ie. managing multiple cores, etc). The traditional definition is systems that&lt;br /&gt;
manage their resources over a Network.&lt;br /&gt;
&lt;br /&gt;
A lot of these definitions are hard to peg down because simplicity always gets in&lt;br /&gt;
the way of truth. These concepts to do not fit into well defined classes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Anil&#039;s definition&#039;&#039;&#039;: &amp;quot;taking the distributed pieces of a system you have and&lt;br /&gt;
turning it into the system you WANT.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It is good to think about about DOS&#039;s within the context of who/what is in&lt;br /&gt;
control, in terms of who makes and enforces decisions in DOS. The traditional kernel-process model is a dictatorship. Authoritarian&lt;br /&gt;
model of control. The kernel controls what lives or dies.  The internet, by&lt;br /&gt;
contrast, is decentralised (eg. DNS). Distributed systems may have distributed&lt;br /&gt;
policies where there is not one source of power.Even in DOS paradigm we can see instances of authoritarian/centralized approaches one example being the walled garden model employed by Apple iOS. Anil&#039;s observation is that centralized systems has an inherent fragility built into and these kind of systems comes to existence and disappear after a while. Examples being AOL, Myspace. Even the Facebook also looks to be a possible candidate for a similar fate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &#039;&#039;&#039;&#039;&#039;*This section below needs to be integrated into the notes above. Anyone can feel free to do this&#039;&#039;&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Yuan Liu&#039;s Notes   &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;(Normal) Operating Systems&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
OS allows you to run on (slightly) different hardware. Functionalities and responsibilities of OSes include:&lt;br /&gt;
&lt;br /&gt;
* abstracts hardware such that hardware resources can be accessed by software&lt;br /&gt;
* provides consistent execution environment (which hardware doesn&#039;t provide)&lt;br /&gt;
* manages I/O (such as user I/O, machine I/O i.e. network I/O, sensors, videos, etc.)&lt;br /&gt;
* manages resources via mulitplexing&lt;br /&gt;
* multiplexing (sharing): one resource wanted by multiple users&lt;br /&gt;
* O/S turns a computer you want to a computer you want to program&lt;br /&gt;
* manages synchronization and concurrency issues&lt;br /&gt;
* resource management and abstraction&lt;br /&gt;
* uses policies to manage resources&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Distributed O/S&#039;&#039;&#039;&lt;br /&gt;
* turns a distributed system (with their hardware) into a distributed system you want to program&lt;br /&gt;
* resource management: who is in charge?&lt;br /&gt;
* in local O/S, the kernel is the boss&lt;br /&gt;
* in distributed O/S, the control is decentralized&lt;br /&gt;
* different humans control their machine&lt;br /&gt;
* has distributed policies for managing resources&lt;br /&gt;
* who decides control? different than local O/S&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Other thoughts&#039;&#039;&#039;&lt;br /&gt;
* a more centralized system will become fragile later&lt;br /&gt;
* concentrations of policy tend to fall apart in the future, according to Anil&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9528</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9528"/>
		<updated>2011-04-12T04:17:14Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Members */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies enabling it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three candidate public goods (physical infrastructure, web caching and DNS), we illustrate the viability and benefits of this conversion.  Finally, we establish criteria for identifying other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable for individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose adding the Internet to this long list. The Internet has become a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be easy to declare the Internet a public good, identifying how to convert it into one is a more difficult process.  The Internet is a heterogeneous system of computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be removed from the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide baseline criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
The following sections present a few key aspects of the Internet that would be excellent candidates for becoming public goods.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, it quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using criteria decided by the ISPs.  While congestion avoidance benefits everyone, because the technology is implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While ISPs have not openly proposed this, the possibility has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt;Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
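The shaping mechanism described above is commonly realized with a token bucket. The short Python sketch below is illustrative only; all rates and sizes are invented numbers, not taken from any ISP. Packets spend tokens, and when the bucket runs dry the traffic is delayed.&lt;br /&gt;

```python
# Hypothetical token-bucket shaper: a sketch of how an ISP might limit
# the rate of one class of traffic. Numbers are illustrative only.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # tokens (bytes) refilled per second
        self.capacity = burst_bytes   # maximum bucket size
        self.tokens = burst_bytes

    def tick(self, seconds):
        # refill, never exceeding the bucket capacity
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_send(self, packet_bytes):
        # send only if the bucket holds at least packet_bytes tokens
        if max(self.tokens, packet_bytes) == self.tokens:
            self.tokens -= packet_bytes
            return True
        return False                  # shaped: the packet must wait

bucket = TokenBucket(rate_bps=1000, burst_bytes=1500)
assert bucket.try_send(1500) is True    # an initial burst is allowed
assert bucket.try_send(100) is False    # bucket empty: traffic is delayed
bucket.tick(1)                          # one second refills 1000 tokens
assert bucket.try_send(1000) is True
```

In this model an ISP would give each traffic class its own bucket, with the priorities mentioned above expressed as different refill rates.&lt;br /&gt;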
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, and this could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection run parallel to the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services, such as email and instant messaging, that are more tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, with its low population density, has areas that parallel the rural regions where this technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied and is related to the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to use the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so they can be reused later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity between requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, and any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
&lt;br /&gt;
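The hierarchical lookup just described can be sketched in a few lines. The following is an illustrative Python model (the level names, URL and data are invented, not taken from the survey): a request climbs the levels until it hits, and the object is copied back down into every level it passed through.&lt;br /&gt;

```python
# Illustrative model of hierarchical web caching: a miss climbs the
# levels toward the origin, and the reply leaves a copy at each level.
LEVELS = ["browser", "local", "regional", "national"]

caches = {level: {} for level in LEVELS}        # level name to {url: object}
origin = {"http://example.org/": "page-bytes"}  # stand-in origin server

def fetch(url):
    missed = []                     # levels that did not have the object
    for level in LEVELS:
        obj = caches[level].get(url)
        if obj is not None:
            break                   # hit at this level
        missed.append(level)
    else:
        obj = origin[url]           # every level missed: go to the origin
    for level in missed:            # leave a copy at each lower level
        caches[level][url] = obj
    return obj

first = fetch("http://example.org/")           # fills every level on the way back
assert fetch("http://example.org/") == first   # now answered by the browser cache
```

A popular object thus propagates toward the demand, which is exactly the bandwidth benefit claimed above.&lt;br /&gt;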
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains metadata about the contents of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
&lt;br /&gt;
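The cooperation in a one-level distributed cache can be sketched similarly. In this illustrative Python model (the class, method names and data are invented), each cache keeps a directory of which peer holds which URL and forwards misses sideways rather than up a hierarchy.&lt;br /&gt;

```python
# Illustrative model of distributed web caching: one level of peers,
# each holding metadata about what the other caches store.
class DistCache:
    def __init__(self, name):
        self.name = name
        self.store = {}        # objects this cache holds itself
        self.directory = {}    # url mapped to the peer believed to hold it
        self.peers = []

    def publish(self, url, obj):
        self.store[url] = obj
        for peer in self.peers:            # advertise the entry to peers
            peer.directory[url] = self

    def lookup(self, url):
        if url in self.store:
            return self.store[url]         # local hit
        holder = self.directory.get(url)
        if holder is not None:
            return holder.store.get(url)   # sideways fetch from a peer
        return None                        # miss: would go to the origin

a = DistCache("a")
b = DistCache("b")
a.peers = [b]
b.peers = [a]
a.publish("http://example.org/", "page-bytes")
assert b.lookup("http://example.org/") == "page-bytes"  # served by peer a
assert b.lookup("http://example.org/other") is None     # genuine miss
```

Because any peer holding a copy can answer, load is spread and the loss of one cache only costs its own entries, matching the load-balancing and fault-tolerance benefits described above.&lt;br /&gt;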
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists, but a number of caches on each level cooperate with each other in a distributed fashion. This type of system can combine the different advantages of the hierarchical and distributed architectures. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, and they do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. It could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could actually vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity would arise to extend the classic definition of web caching. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also making use of the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. This would mean that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available to modern day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. This would mean that a region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, as well as have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests being sent out from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by caches of another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in these types of web requests being satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches rather than having to be sent all the way to the original web server.&lt;br /&gt;
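The lookup order described here can be sketched as a simple chain of caches consulted in turn before falling back to the origin server. This is a minimal, illustrative model only; the level names, URLs and fill-on-the-way-back policy are assumptions for the sketch, not part of the proposal itself.

```python
# Minimal sketch of a hierarchical cache lookup: consult each level in
# order (neighbourhood -> regional -> national) before the origin server.
def fetch(url, cache_levels, origin):
    for level, cache in cache_levels:
        if url in cache:
            return cache[url], level          # request satisfied at this level
    data = origin[url]                        # last resort: the origin server
    for _, cache in cache_levels:
        cache[url] = data                     # populate every level on the way back
    return data, "origin"

# Hypothetical caches and origin content for illustration.
neighbourhood, regional, national = {}, {}, {}
levels = [("neighbourhood", neighbourhood),
          ("regional", regional),
          ("national", national)]
origin = {"http://example.org/": "&lt;html&gt;...&lt;/html&gt;"}

print(fetch("http://example.org/", levels, origin))  # first request: from origin
print(fetch("http://example.org/", levels, origin))  # repeat: neighbourhood hit
```

The second request never leaves the neighbourhood, which is the latency win the paragraph above describes; a real design would also need eviction and consistency policies that this sketch omits.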
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication would allow for fault tolerance, and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, since full web applications could now be cached, any user would have full access to any web application or data currently &amp;quot;living&amp;quot; on any reachable cache. This means that if a region is disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how web caching is done, can be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, maybe with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users/regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would incur significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which, in itself, is a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system. DNS (Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Wikipedia/Domain Name System. Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a simpler, higher-level view of the system is taken. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database or tree of names to IP addresses for its users to access.&lt;br /&gt;
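The static, distributed tree just described can be sketched as follows. This is a toy model only, assumed for illustration: the zone layout, labels and the sample address are hypothetical, and a real resolver involves delegation, caching and time-to-live handling that this sketch omits.

```python
# Toy model of DNS as a static, distributed tree: each node owns one
# label and either delegates to a child zone or holds a final address record.
class Zone:
    def __init__(self, records=None, children=None):
        self.records = records or {}      # leaf name -> IP address
        self.children = children or {}    # label -> delegated child Zone

def resolve(root, domain):
    """Walk the tree from the root, one label at a time, right to left."""
    node = root
    for label in reversed(domain.split(".")):
        if label in node.children:        # delegation: descend one level
            node = node.children[label]
        elif label in node.records:       # authoritative answer found
            return node.records[label]
        else:
            return None                   # name does not exist (NXDOMAIN)
    return None

# Hypothetical zone data for illustration only.
root = Zone(children={
    "ca": Zone(children={
        "carleton": Zone(records={"www": "192.0.2.10"}),
    }),
})

print(resolve(root, "www.carleton.ca"))
```

Querying `www.carleton.ca` walks root → `ca` → `carleton` and returns the single stored address, which is exactly the "name in, IP address out" behaviour the discussion above assumes.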
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service. This means that all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL. This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist. Rogers Implements New Approach On Failed DNS Lookups. July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Eric Bangeman. Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects. arstechnica.com. July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries. Slashdot.org. August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS [http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy. In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given its clean track record when it comes to providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;JR Raphael. Google Public DNS: Good for privacy? PCWorld.com. December 2009 - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would now have deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;David Coursey. Google Public DNS: Wonderful Freebie or Big New Menace? PCWorld.com. December 2009. - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large a responsibility lying on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations. These issues centre around bottlenecks, update propagation, attack resiliency and general performance. Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects due to a low number of servers being accessed by many users. For example, Bell Canada customers are served by two servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world update their records on static schedules and, with caching taken into account, this period is required for the changes to spread.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks. Malicious users can target the limited servers to severely disrupt Internet traffic at any time. Measures are in place to prevent this kind of attack; however, as with anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. Much like web caching does for regular content browsing, DNS caching improves performance by reducing latency. DNS caches could be contained within the web caching schemes presented in the previous section, as the hierarchical structure described can function equally well for DNS purposes. Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion; users are able to dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching. One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011 [http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts improvements on all of the factors indicated above. It is also incrementally deployable, which is a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of the discussion in this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, with this implementation the static DNS tree is decentralized and distributed across the network. This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good. The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service. Without a functional service, the bulk of Internet traffic would falter, as it would not know where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, it is a service that is required to be both reliable and trusted. A user base is dependent on some form of trusted source, whether it is a governed initiative, a corporately controlled process, or a user-contributed service. Having the DNS in public hands will ensure this reliable service. If the public is in control of web caching as well, DNS can be incrementally deployed and rolled out as a piggyback to that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is satisfied. Users must trust some entity for the service, so it is essential that this entity have the public’s best intentions in mind. Misinformation and misdirection will be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most ideal for the public. The incremental deployment of the CoDoNS service described above is one example of an upgrade that could be rolled out in this manner.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized, as users need only use the sites as they see fit and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization should work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining and acquiring or mandating the current system will impose a financial burden on the public, as does any good that is brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemas sooner than some form of public authority, given less overhead or caution when it comes to decision making.  Users may miss out on the newest services available as the authority evaluates any upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would end up being very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is mandatory. If an aspect of the Internet cannot be given guaranteed access to all of the users in its reach, then it should not be considered a public good, by definition.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern day society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public. Many modern businesses rely on the Internet today for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to successfully identify future public goods candidates. An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effects that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more qualifying aspects of the Internet that are converted into public goods, the more noticeable each individual advantage will become. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the world wide web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt&lt;br /&gt;
*Fahim Rahman&lt;br /&gt;
*Andrew Schoenrock&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9527</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9527"/>
		<updated>2011-04-12T04:16:47Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Alternative/Public */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society. The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to the access and use of the Internet. Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated. Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, Internet access for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process. The Internet is a system of heterogeneous computers and hardware running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We chose these three because they are absolutely essential to the current operation of the Internet, and we propose how each could be taken out of the sole control of private companies and converted into a public good.  After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
The following sections present a few key aspects of the Internet that would be excellent candidates for becoming public goods.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the Internet has become ubiquitous, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, it quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers); they are the entities any user must currently pay to gain access to the Internet. For the purposes of this paper, we consider the servers, routers, switches, hubs, wires, fiber and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we do not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion; it works by assigning priorities to packets using criteria the ISPs themselves decide.  While congestion avoidance benefits everyone, with the technology implemented privately we do not know which protocols are limited, by how much, or whether limiting happens only at peak times.  We also do not know whether the technology is deployed simply to decrease bandwidth consumption so that the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid them, which could be implemented by slowing or disallowing traffic to competitors.  While ISPs have not openly proposed this, it is the behaviour opposed by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt;Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: during the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
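&lt;br /&gt;
To make the mechanism concrete, here is a minimal sketch of priority-based packet shaping. The protocol names, priorities and rate are invented for illustration, since ISPs do not publish their actual policies.&lt;br /&gt;

```python
import heapq

# Illustrative priorities an ISP might assign; real policies are opaque.
PRIORITY = {"voip": 0, "http": 1, "p2p": 2}  # lower value = sent first

class PriorityShaper:
    """Queue packets by protocol priority and drain at a fixed rate."""
    def __init__(self, rate_pkts_per_tick):
        self.rate = rate_pkts_per_tick
        self.queue = []
        self.seq = 0  # tie-breaker keeps FIFO order within a priority

    def enqueue(self, protocol, packet):
        prio = PRIORITY.get(protocol, 1)  # unknown protocols get default
        heapq.heappush(self.queue, (prio, self.seq, packet))
        self.seq += 1

    def tick(self):
        """Send up to `rate` packets this tick, highest priority first."""
        sent = []
        for _ in range(min(self.rate, len(self.queue))):
            _, _, packet = heapq.heappop(self.queue)
            sent.append(packet)
        return sent

shaper = PriorityShaper(rate_pkts_per_tick=2)
shaper.enqueue("p2p", "torrent-chunk")
shaper.enqueue("http", "page-request")
shaper.enqueue("voip", "call-frame")
print(shaper.tick())  # ['call-frame', 'page-request'] -- the p2p packet waits
```

The opacity complained about above lives entirely in the `PRIORITY` table and the drain rate: outsiders cannot observe either, only the resulting delays.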
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by requiring the ISPs to act in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law is passed preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure connecting urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation we chose to explore for the purposes of this paper is a wireless mesh.  The mesh would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers, plus a large number of highly mobile nodes with variable availability consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing so&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect the urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well; as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
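&lt;br /&gt;
As a rough illustration of the super-node idea, the following sketch elects, per neighbourhood, the most-available node to take on routing duties. The node names, availability scores and threshold are hypothetical; real mesh protocols such as DART use far more sophisticated, fully distributed election and addressing schemes.&lt;br /&gt;

```python
# Toy super-node election for a wireless mesh: each neighbourhood of
# nodes picks its highest-availability member to handle routing.

def elect_supernodes(nodes, threshold=0.9):
    """nodes: dict of node id -> (neighbourhood, availability in [0, 1]).
    Returns one super node per neighbourhood: the most-available member,
    provided it meets the minimum availability threshold."""
    by_hood = {}
    for node_id, (hood, availability) in nodes.items():
        best = by_hood.get(hood)
        if best is None or availability > best[1]:
            by_hood[hood] = (node_id, availability)
    return {hood: node_id
            for hood, (node_id, avail) in by_hood.items()
            if avail >= threshold}

mesh = {
    "laptop-1":  ("west-end", 0.40),   # mobile, often offline
    "desktop-1": ("west-end", 0.97),   # static home machine
    "phone-1":   ("east-end", 0.30),
    "desktop-2": ("east-end", 0.95),
}
print(elect_supernodes(mesh))  # {'west-end': 'desktop-1', 'east-end': 'desktop-2'}
```

Note how the static home machines win the election in both neighbourhoods: this matches the two-tier node population described above, where stable nodes carry the routing load and mobile devices merely participate.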
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging and other services more tolerant of lower speeds from the conventional infrastructure, freeing up bandwidth on the privately owned ISPs.  This in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and could help make Internet access available to fiscally disadvantaged members of the population as well.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, with its low population density, has areas comparable to the rural regions where this technology has already been deployed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental rollout.  It could start in a single neighborhood, using the wireless of the neighbours to create a small network.  As the mesh grows, it can be self-organizing, with the composing nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure in the urban centre.  The relationship between the density of connection points and the speeds the mesh can sustain has been studied, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in providing infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, normally considered a low-bandwidth service: if a large attachment were present, it would make sense to download it over the faster network connection.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh in which the population provides some of the nodes active in routing and otherwise maintaining the network incurs some cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be served later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned and the request is not passed on to the originating web server. Many aspects of websites do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity between requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. For many ISPs, transmission costs dominate overall operating costs, and any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. Even small performance improvements made by an ISP through caching have been found to result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage for the providing web server&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that reach it by serving data the cache has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness a web cache adds to the Internet, allowing users to access documents even when the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
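&lt;br /&gt;
The basic get-or-fetch mechanism described above can be sketched as follows. The TTL-based expiry and the pluggable fetch function are simplified stand-ins for real HTTP cache-control handling, which respects per-response headers rather than a single global lifetime.&lt;br /&gt;

```python
import time

class WebCache:
    """Minimal proxy-style web cache: serve a stored copy while it is
    fresh, otherwise fetch from the origin and keep a copy for the next
    requester."""
    def __init__(self, fetch_from_origin, ttl_seconds=300):
        self.fetch = fetch_from_origin   # called only on a cache miss
        self.ttl = ttl_seconds           # crude stand-in for Cache-Control
        self.store = {}                  # url -> (body, stored_at)
        self.hits = self.misses = 0

    def get(self, url):
        entry = self.store.get(url)
        if entry and time.time() - entry[1] < self.ttl:
            self.hits += 1
            return entry[0]              # served locally, no origin traffic
        self.misses += 1
        body = self.fetch(url)
        self.store[url] = (body, time.time())
        return body

origin_calls = []
cache = WebCache(lambda url: origin_calls.append(url) or f"<page {url}>")
cache.get("http://example.org/")   # miss: goes to the origin
cache.get("http://example.org/")   # hit: served from the cache
print(cache.hits, cache.misses, len(origin_calls))  # 1 1 1
```

The hit/miss counters make the bandwidth argument above measurable: every hit is one request that never left the ISP's network.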
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are merely implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture, in which web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional and finally a national-level cache. Web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand.&lt;br /&gt;
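&lt;br /&gt;
The hierarchical lookup just described, where a request climbs the levels and a copy is left at each level that missed on the way back down, can be sketched as follows; the level names and dictionary-backed caches are illustrative only.&lt;br /&gt;

```python
# Sketch of a hierarchical cache lookup: try each level from the one
# closest to the client upward, fall back to the origin, then populate
# every level that missed on the way back down.

def hierarchical_get(url, levels, fetch_origin):
    """levels: ordered list of dict caches, lowest (closest) first."""
    for depth, cache in enumerate(levels):
        if url in cache:
            body, hit_at = cache[url], depth
            break
    else:
        body, hit_at = fetch_origin(url), len(levels)
    for cache in levels[:hit_at]:   # leave a copy at each level that missed
        cache[url] = body
    return body

local, regional, national = {}, {}, {}
levels = [local, regional, national]
national["http://example.org/"] = "<page>"  # only the top level has it

hierarchical_get("http://example.org/", levels, fetch_origin=lambda u: "<fetched>")
print("http://example.org/" in local)  # True: the copy propagated down
```

A second request for the same URL now stops at the local level, which is exactly the "popular sites propagate towards the demand" behaviour.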
&lt;br /&gt;
Another potential architecture is distributed web caching, in which there is a single level of caches that cooperate to satisfy web requests. To do this, each cache retains meta-data about the contents of all of the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance not available in strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
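&lt;br /&gt;
A toy version of this cooperation might look like the following, where each cache advertises its contents in a shared directory so a peer can answer a request without contacting the origin. Real systems exchange digests or summaries of their contents between machines; the shared in-memory directory here is a deliberate simplification.&lt;br /&gt;

```python
# Toy distributed cache: peers at a single level share a directory of
# who holds which URL, so one cache can serve another's miss.

class DistributedCache:
    def __init__(self, name, directory):
        self.name = name
        self.local = {}             # urls this cache stores itself
        self.directory = directory  # shared map: url -> owning cache

    def put(self, url, body):
        self.local[url] = body
        self.directory[url] = self  # advertise to the cooperating caches

    def get(self, url, fetch_origin):
        if url in self.local:
            return self.local[url]
        owner = self.directory.get(url)
        if owner is not None:       # a cooperating peer has it
            return owner.local[url]
        body = fetch_origin(url)    # nobody has it: go to the origin
        self.put(url, body)
        return body

directory = {}
cache_a = DistributedCache("isp-a", directory)
cache_b = DistributedCache("isp-b", directory)
cache_a.put("http://example.org/", "<page>")
# cache_b answers from its peer, never contacting the origin:
print(cache_b.get("http://example.org/", fetch_origin=lambda u: "<fetched>"))
```

Because the object stays stored only at its owner, the peers gain the load-balancing and fault-tolerance properties described above without duplicating every object at every level, as the hierarchy does.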
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture: a hierarchy of caches in which a number of caches at each level cooperate with each other in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol&amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction matters, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and the end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level caches and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Formalized, standardized caching hierarchies would reduce wasted bandwidth and improve the end user experience. They would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, making the web caches more fault tolerant and thus more robust.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and to build neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently carried at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power, removing the burden from the users&#039; computers and letting a special-purpose device take over. Since most users reset their modems far less often than they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood, or even house to house, depending on circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web application code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while using the locally cached data as well. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could then realistically come together to implement their application: growing popularity would no longer dramatically increase their hardware and support costs as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available only to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in reachable caches. This added robustness could reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the proposed standardized, hierarchical/distributed hybrid web caches, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from one ISP&#039;s user that could have been satisfied by another local ISP&#039;s cache must instead be retrieved all the way from the originating web server. Under the proposed architecture these caches could cooperate, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing long-distance web traffic.&lt;br /&gt;
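The lookup path through such a cooperating hierarchy can be sketched as follows. This is a minimal illustration, not part of the proposal itself: the `lookup` function, the dictionary-backed cache tiers, and `fetch_origin` are all hypothetical names.

```python
def lookup(url, tiers, fetch_origin):
    """Walk the cache hierarchy from the most local tier outward
    (e.g. neighbourhood -> regional -> provincial -> national).
    On a hit, copy the object back down into every tier that missed,
    so later nearby requests are satisfied locally."""
    missed = []
    for cache in tiers:           # each tier behaves like a dict
        if url in cache:
            data = cache[url]
            break
        missed.append(cache)
    else:
        data = fetch_origin(url)  # no tier had it: go to the origin server
    for cache in missed:          # populate the tiers that missed
        cache[url] = data
    return data
```

A second request for the same URL from the same neighbourhood would then hit the first tier immediately, without any long-distance traffic.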
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
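The duplication described above can be sketched with a simple replica-placement rule. The paragraph does not prescribe a particular algorithm, so the scheme below is purely illustrative: ranking caches by a hash of (cache name, key), as in rendezvous hashing, has exactly the desired property that when one cache goes down, the surviving replica keeps its placement and only the lost copy needs redistribution.

```python
import hashlib

def replica_caches(key, caches, copies=2):
    """Choose `copies` distinct caches to hold a duplicate of `key`
    by ranking every cache on a hash of (cache name + key).
    Removing one cache leaves the remaining caches' relative ranks
    unchanged, so surviving replicas stay where they are."""
    ranked = sorted(
        caches,
        key=lambda c: hashlib.sha256((c + "/" + key).encode()).hexdigest(),
    )
    return ranked[:copies]
```
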
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies that improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both software and infrastructure terms, is incrementally deployable. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits of making web caching a public good, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure in place, even if it incorporated the ISP caches already deployed, would certainly carry significant infrastructure costs. Although a considerable amount of infrastructure may exist in large urban centers, rural regions and the higher levels of the hierarchy (provincial, national, etc.) would likely need sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (the Domain Name System) serves this purpose by allowing resources to be referred to by name rather than by a series of numbers; it can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Wikipedia/Domain Name System. Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system: for the purposes of this discussion, DNS is treated as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. The service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
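From a program&#039;s point of view this whole tree collapses into a single call: hand the system&#039;s configured resolver a name and get an IP address back. A minimal sketch using Python&#039;s standard library; `localhost` is used only because it resolves without network access, and the `resolve` wrapper is our own name, not a standard API.

```python
import socket

def resolve(hostname):
    """Ask the operating system's configured DNS resolver (typically the
    ISP's server) to map a hostname to an IPv4 address."""
    return socket.gethostbyname(hostname)

print(resolve("localhost"))  # 127.0.0.1 on virtually every system
```

Everything the rest of this section discusses, from caching to bottlenecks, sits behind this one name-to-number mapping.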
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service. This means all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist. Rogers Implements New Approach On Failed DNS Lookups. July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Eric Bangeman. Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects. arstechnica.com. July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries. Slashdot.org. August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternatives, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS [http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues arise here as well. In the case of Google, even given a clean track record in providing free applications and services, there is reason to consider how it will treat and use the information it gains access to.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;JR Raphael. Google Public DNS: Good for privacy? PCWorld.com. December 2009 - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would gain deep visibility into user behaviour, since it could observe every single thing being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;David Coursey. Google Public DNS: Wonderful Freebie or Big New Menace? PCWorld.com. December 2009. - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues must also be considered for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing ample content, it is difficult to imagine this large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit from the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations, centering on bottlenecks, update propagation, attack resiliency and general performance. Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottlenecks because a small number of servers must be accessed by many users. For example, Bell Canada customers are served by two DNS servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers results in affecting the attack resiliency as well, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world refresh their records on fixed schedules and, because of caching, this period is required for the changes to reach everyone.&lt;br /&gt;
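The 48-hour figure comes from record time-to-live (TTL) values: a resolver that cached a record just before a change keeps serving the old answer until the TTL runs out. A small sketch of that staleness window; the function name and the example numbers are illustrative, not drawn from any specification.

```python
def change_visible(cached_at, ttl_seconds, now):
    """A resolver that cached a record at time `cached_at` (in seconds)
    with time-to-live `ttl_seconds` will not re-query the authoritative
    server until the TTL expires, so an upstream change only becomes
    visible to that resolver once `cached_at + ttl_seconds` has passed."""
    return now >= cached_at + ttl_seconds

TWO_DAYS = 48 * 60 * 60  # a common worst-case TTL, hence "up to 48 hours"
```

A resolver that cached a record the instant before a change is therefore the worst case; every other resolver sees the change sooner.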
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified under the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks: malicious users can target the limited servers to severely disrupt Internet traffic at any time. Measures are in place to prevent this kind of attack; however, like anything security-related, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing. DNS caches could be contained within the web caching schemes presented in the previous section, since the hierarchical structure described there functions equally well for DNS purposes. Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion: users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
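Such a piggybacked DNS cache only needs to honour record TTLs on top of an ordinary lookup table. A toy sketch, assuming the hierarchy above; the class name and interface are hypothetical, not an existing API.

```python
import time

class DNSCache:
    """Toy TTL-respecting DNS cache, as might sit at each level of the
    web-cache hierarchy. Entries expire after their TTL, after which a
    lookup misses and the query escalates to the next level."""

    def __init__(self):
        self._entries = {}  # name -> (address, absolute expiry time)

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self._entries[name] = (address, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry is None or now >= entry[1]:
            return None  # miss, or record expired
        return entry[0]
```

A miss at one level would simply be forwarded to the next level of the hierarchy, and ultimately to an authoritative server.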
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when aided by caching. One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011 [http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts improvements on all the factors indicated above. It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Given the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network, removing the bottlenecks and increasing resiliency against attack, since the single points of failure are eliminated.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good. Directing traffic on the Internet, whether user-driven or application-driven, requires a naming service; without a functional service, the bulk of Internet traffic would falter, not knowing where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, the service must be both reliable and trusted. The user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service. Having DNS in public hands would ensure this reliable service, and if web caching is also publicly controlled, DNS can be incrementally deployed by piggybacking on that rollout.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest driving the maintenance of this service, the reliability and trust issues are satisfied. Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind. Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next generation service or upgrade, it will be done when deemed most ideal for the public. The incremental deployment of the CoDoNS service described above is one example of an upgrade a public authority could roll out.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring or mandating, and then maintaining, the current system will impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority, given less overhead and caution in decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users for a given Internet public good is mandatory. If an aspect of the Internet cannot be given guaranteed access to all of the users in its reach, then it should not be considered a public good, by definition.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership: many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only selected portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria emerged that can be used to identify future public goods candidates on the Internet. A further benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways; for instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from that provided by the proposed web caching scheme. Moreover, the performance improvements provided by one public good would most likely be amplified by the gains introduced by other, new public goods: the more qualifying aspects of the Internet that are converted into public goods, the more noticeable each individual advantage becomes. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier: this type of open access to computing resources would allow people with great ideas to implement, test and deploy them on the World Wide Web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9526</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9526"/>
		<updated>2011-04-12T04:15:38Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS Evolution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society. The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies enabling it should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to access and use of the Internet. Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated. Finally, criteria for identifying other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed essential, beneficial and non-excludable, both to individuals and to the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose adding the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, identifying how to convert it to one is a more difficult process. The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be managed effectively by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We propose how these aspects could be taken out of the sole control of private companies and converted into public goods. We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, we present a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise with the ISPs owning the infrastructure of the Internet. These companies make decisions based on their own profit margins, with little regard for the public good. One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;. Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs. While congestion avoidance benefits everyone, when the technology is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times. We also don&#039;t know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure. Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP. This could be implemented by slowing or disallowing traffic to competitors. While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt;Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;. More recently we have become acutely aware that ISPs provide convenient choke points. In Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet simply by forcing the ISPs to shut down. This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two. The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism. This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interests of the public. One problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means. Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it. These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet. We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people. This new infrastructure would coexist with the current ISPs and operate in parallel. Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds. In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal). This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries. Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;. Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres. Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing. Finally, at the highest level, different countries could connect their meshes together. As mentioned previously, these levels of connection parallel the levels of government that we have in Canada. As wireless technology improves, the speed and coverage of the mesh will improve as well, and as the level of support increases, the publicly offered speed could increase too. Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
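The election of super nodes described above can be sketched as follows. This is a minimal illustration only: the node attributes (availability, bandwidth) and the scoring rule are our own assumptions, not part of the DART or landmark-flooding protocols cited.&lt;br /&gt;

```python
# Illustrative sketch of electing "super nodes" in a wireless mesh.
# The attributes and the scoring rule are assumptions for illustration.

def elect_super_nodes(nodes, count):
    """Pick the `count` nodes best suited to act as routing super nodes.

    `nodes` maps a node id to (availability, bandwidth_mbps); stable,
    high-bandwidth nodes (e.g. desktops) should win over mobile ones.
    """
    def score(item):
        _, (availability, bandwidth) = item
        return availability * bandwidth   # simple combined fitness
    ranked = sorted(nodes.items(), key=score, reverse=True)
    return [node_id for node_id, _ in ranked[:count]]

mesh = {
    "desktop-1": (0.99, 50.0),   # static home machine, high availability
    "desktop-2": (0.95, 25.0),
    "laptop-1":  (0.40, 30.0),   # mobile, often offline
    "phone-1":   (0.20, 10.0),
}
print(elect_super_nodes(mesh, 2))   # the two most stable, fastest nodes
```

In practice the election would be re-run as nodes join, leave, or change availability, which is where the routing research cited above becomes relevant.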
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services that tolerate lower speeds, such as email and instant messaging. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness. A mesh has no single point of connection, so it cannot be disabled as easily as current ISPs can be. Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition. Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone. This could negate the need for ISPs for some users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and it could also make Internet access available to fiscally disadvantaged members of the population. Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;. Due to its low population density, Canada has areas that parallel the rural regions where this technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out. It could start in a single neighborhood, using the neighbours&#039; wireless to create a small network. As the mesh increases in size, it can be self-organizing, with the composing nodes elected to more prominent roles if they have sufficient speed. The municipality could support this topology by adding wireless access points attached to the urban centre&#039;s higher-speed wired infrastructure. The density of connection points has been studied, and there is a relationship between this density and the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of some infrastructure would necessitate an increase in taxes. Since the support would be at all levels of government, the taxes would be distributed at all levels becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change. An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to take advantage of the faster network connection to download it. Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e., logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of total bandwidth used and, of this web-based traffic, the level of similarity between requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that a web cache adds to the Internet, allowing users to access documents even when the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
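The basic mechanism described above (serve a stored copy on a repeat request, otherwise fetch from the origin and remember the result) can be sketched as follows. The origin server is simulated with a dictionary; a real proxy would also honour freshness rules such as HTTP Cache-Control, which are omitted here.&lt;br /&gt;

```python
# Minimal sketch of the web-caching idea: serve a stored copy when a
# repeated request arrives, otherwise fetch from the origin server.
# ORIGIN is a stand-in for the real web server (an assumption for
# illustration); freshness/expiry checks are deliberately omitted.

ORIGIN = {"/logo.png": "PNG bytes", "/index.html": "hello"}

class WebCache:
    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:          # cache hit: no origin round trip
            self.hits += 1
            return self.store[url]
        self.misses += 1               # cache miss: fetch and remember
        body = ORIGIN[url]
        self.store[url] = body
        return body

cache = WebCache()
cache.get("/logo.png")    # miss: fetched from the origin
cache.get("/logo.png")    # hit: served locally, no outgoing traffic
print(cache.hits, cache.misses)   # prints: 1 1
```

Every hit is a request that never leaves the ISP, which is exactly the bandwidth saving described in the list above.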
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, since they are considered implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
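The climb-then-populate behaviour of a hierarchical cache can be sketched as follows. The level names and the ORIGIN stand-in are illustrative assumptions, not details from the cited survey.&lt;br /&gt;

```python
# Sketch of a hierarchical cache lookup: a request climbs from the
# client-level cache toward the national level until it is satisfied,
# and a copy is left at each lower level on the way back down.
# ORIGIN stands in for the real web server (illustrative assumption).

ORIGIN = {"/news": "front page"}

def hierarchical_get(url, levels):
    """`levels` is ordered lowest (client) to highest (national)."""
    for i, cache in enumerate(levels):
        if url in cache:                     # satisfied at level i
            body = cache[url]
            break
    else:
        i, body = len(levels), ORIGIN[url]   # fell through to the origin
    for cache in levels[:i]:                 # populate every lower level
        cache[url] = body
    return body

client, local, regional, national = {}, {}, {}, {}
hierarchical_get("/news", [client, local, regional, national])
# After one request the object now sits at every level of the hierarchy:
print("/news" in client, "/news" in national)   # prints: True True
```

A second request for the same URL is answered at the client level, so popular objects propagate toward the demand, as the paragraph above describes.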
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with one another to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
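The meta-data-driven cooperation described above can be sketched as follows. The shared directory mapping URLs to holders is our own simplified stand-in for the per-cache meta-data the cited systems maintain; names and the ORIGIN dict are illustrative.&lt;br /&gt;

```python
# Sketch of distributed web caching: peer caches record which peer
# holds which object and forward misses to the holder instead of the
# origin. DIRECTORY is a simplified stand-in for the meta-data each
# cache would retain about its siblings (illustrative assumption).

ORIGIN = {"/video": "mp4 bytes"}
DIRECTORY = {}          # url to the name of the peer cache holding it

class PeerCache:
    def __init__(self, name, peers):
        self.name, self.store, self.peers = name, {}, peers
        peers[name] = self

    def get(self, url):
        if url in self.store:                  # local hit
            return self.store[url]
        holder = DIRECTORY.get(url)
        if holder is not None:                 # sibling hit: ask the peer
            return self.peers[holder].store[url]
        body = ORIGIN[url]                     # miss everywhere: origin
        self.store[url] = body
        DIRECTORY[url] = self.name             # advertise to siblings
        return body

peers = {}
a, b = PeerCache("isp-a", peers), PeerCache("isp-b", peers)
a.get("/video")             # fetched from the origin, recorded in directory
print(b.get("/video"))      # served by peer isp-a, no origin round trip
```

The second ISP's request never reaches the origin server, which is the load-balancing benefit the paragraph above attributes to the distributed design.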
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, of course, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. Customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good, it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently handled at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power, removing the burden from the users&#039; computers and allowing a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built quite inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while using the locally cached data as well. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests being sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by caches implemented by another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in such web requests being satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed strategy implements distributed caching at each level of the hierarchy, it adds a level of reliability that modern web caching lacks. Since the storage space of the distributed caches at each level will likely exceed what can be used efficiently as a cache, the surplus can hold duplicated data. This duplication provides fault tolerance, and the caches could be implemented to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability through full web application caching: if a single region were disconnected from the Internet, users could continue to use popular cached web applications and data until they were reconnected. Web application programmers could account for these scenarios and sync local data with their back-end servers once a connection is re-established, resulting in unprecedented reliability for the Internet&#039;s most popular sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, because full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache within that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies to improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme is incrementally deployable, both in software and in infrastructure. A scheme like the one proposed would most likely start in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the benefits mentioned previously) and, as more users and regions joined, the overall system would only improve.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits of making web caching a public good, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it incorporated the ISP caches already in operation, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is needed to refer to the different resources within this distributed system. DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system work in a user-friendly manner, a user or application needs only to supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Wikipedia/Domain Name System. Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
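&lt;br /&gt;
As a concrete illustration, this name-to-address mapping can be exercised with Python&#039;s standard library: socket.gethostbyname asks the system&#039;s configured DNS resolver (typically the ISP&#039;s, as discussed below) for the address bound to a name.&lt;br /&gt;

```python
import socket

# Ask the system's configured DNS resolver for the IPv4 address
# bound to a hostname; the caller supplies only the name and the
# service answers with a dotted-quad address.
def resolve(hostname):
    return socket.gethostbyname(hostname)

address = resolve("localhost")   # a dotted IPv4 string such as 127.0.0.1
```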
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP): the ISP maintains the database, or tree, mapping names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service. Users understand that all of their requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent domain. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist. Rogers Implements New Approach On Failed DNS Lookups. July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Eric Bangeman. Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects. arstechnica.com. July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries. Slashdot.org. August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their DNS requests to be processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS [http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues also arise around user privacy, though. In the case of Google, even given a clean track record in providing free applications and services to users, there is reason to consider how the company will treat and use the information it gains access to.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;JR Raphael. Google Public DNS: Good for privacy? PCWorld.com. December 2009 - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would gain deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations. These issues centre on bottlenecks, update propagation, attack resiliency and general performance. Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottlenecks because a small number of servers must serve many users. For example, Bell Canada customers across the entire country are served by just two servers for this service. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain&#039;s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world cache records for a fixed period (the record&#039;s time-to-live), so a change only becomes visible everywhere once the old cached copies have expired.&lt;br /&gt;
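&lt;br /&gt;
The delay can be understood with a minimal sketch of time-to-live (TTL) based record caching. This is illustrative only; TtlCache is a hypothetical name, not a real resolver API.&lt;br /&gt;

```python
import time

# A cached record is reused until its time-to-live expires, so a
# nameserver change only becomes visible to a resolver once its old
# cached copy has timed out.
class TtlCache:
    def __init__(self):
        self.records = {}   # name maps to (address, expiry_time)

    def lookup(self, name, authoritative, ttl_seconds, now=None):
        now = time.time() if now is None else now
        entry = self.records.get(name)
        if entry is not None and entry[1] > now:
            return entry[0]                    # cached copy still valid
        address = authoritative[name]          # full lookup at the source
        self.records[name] = (address, now + ttl_seconds)
        return address
```

Even after the authoritative record changes, the resolver keeps answering with the stale address until the TTL runs out, which is exactly the propagation window described above.&lt;br /&gt;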
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks. Malicious users can target the limited set of servers to severely disrupt Internet traffic at any time. Measures are in place to prevent this kind of attack; however, as with anything security-related, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. Much as web caching does for regular content browsing, DNS caching improves performance by reducing latency. DNS caches could be contained within the web caching schemes presented in the previous section: the hierarchical structure described there can function equally well for DNS purposes. Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching. One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011 [http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above. It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Given the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network. This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good. Directing traffic on the Internet, whether user-driven or application-driven, requires a naming service; without a functional one, the bulk of Internet traffic would falter, not knowing where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by ISPs and the privacy concerns raised by some of the alternative services suggest that the ideal scenario lies in public hands. Regardless of its implementation, the service must be both reliable and trusted: its user base depends on some form of trusted source, whether a governed initiative, a corporately controlled process, or a user-contributed service. Having DNS in public hands will ensure this reliable service. If the public also controls web caching, DNS can be incrementally deployed by piggybacking on that rollout.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest behind maintaining this service, the reliability and trust issue is satisfied. Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind. With trust placed in the applicable public authority, misinformation and misdirection can be averted.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it will be done when deemed most ideal for the public. The incremental deployment of the CoDoNS service described above is one example of an upgrade that could be rolled out in this fashion. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, reduced wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as will any good that is brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority, given less overhead or caution in decision making. Users may miss out on the newest services available while the authority evaluates its upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, cycling through the public&#039;s hands very quickly at great expense. In general, novel aspects of the Internet should be left in private hands; only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is mandatory. If guaranteed access to an aspect of the Internet cannot be provided to all of the users within its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience, yet the parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it grows more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, significant portions of the Internet would be undesirable or unlikely candidates for public ownership: many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only certain portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived; using this set of criteria, one can identify future public good candidates. An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. Moreover, the performance improvements provided by one public good would most likely be amplified by the gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet meeting the above criteria that are converted into public goods, the more noticeable each individual advantage becomes. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9525</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9525"/>
		<updated>2011-04-12T04:14:56Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Alternative/Public */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society. The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to the access and use of the Internet. Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated. Finally, criteria to identify other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed essential, beneficial and non-excludable for individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose adding the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, Internet access for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process. The Internet is a system of heterogeneous computers and hardware running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We propose how these aspects could be taken out of the sole control of private companies and converted into public goods. We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased. While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, it quickly became a provider of entertainment and communication. Today the Internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual. For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers); these are the entities any user must currently pay to gain access to the Internet. For the purposes of this paper, we consider the servers, routers, switches, hubs, wires, fiber and all other hardware outside of consumers&#039; own networks to be the infrastructure of the Internet, and we do not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion; it works by assigning priorities to packets using criteria decided by the ISPs.  While congestion avoidance benefits everyone, with the technology implemented by private companies we do not know which protocols are limited, by how much, or whether shaping is only done at peak times.  We also do not know whether the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid them, which could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt;Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: during the uprising in Egypt, the incumbent government cut off the population&#039;s access to the Internet simply by forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
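To make the shaping mechanism concrete, rate limiters of this kind are commonly built around token buckets. The sketch below is purely illustrative (real ISP shapers classify traffic with deep packet inspection and run at line rate in hardware); the &#039;p2p&#039; class name and the rates are hypothetical:&lt;br /&gt;

```python
import time

class TokenBucket:
    """Illustrative token-bucket shaper: packets of one traffic class are
    forwarded only while tokens (bytes) are available, capping its rate."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps          # tokens (bytes) added per second
        self.capacity = burst_bytes   # maximum burst size
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens accrued since the last packet, up to the bucket cap.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # forward immediately
        return False      # rate limit exceeded: delay or drop

# Hypothetical policy: throttle a "p2p" class to 64 KB/s with a 16 KB burst,
# while every other traffic class passes untouched.
shapers = {"p2p": TokenBucket(rate_bps=64_000, burst_bytes=16_000)}

def classify_and_shape(protocol, packet_bytes):
    bucket = shapers.get(protocol)
    return True if bucket is None else bucket.allow(packet_bytes)
```

The opacity the text complains about is exactly that outsiders cannot inspect the contents of a table like &#039;shapers&#039;: which classes are throttled, at what rates, and when.&lt;br /&gt;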
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
Given the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by requiring the ISPs to act in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing it is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with that of the current ISPs.  Conceivably, the speed of the new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure connecting urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation we chose to explore for the purposes of this paper is a wireless mesh.  The mesh would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability, consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM Transactions on Networking, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, though performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
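As a rough illustration of the super-node election mentioned above, the sketch below scores nodes by uptime, bandwidth and mobility and elects the top scorers. The scoring weights are our own assumptions and stand in for the far more sophisticated routing protocols cited in the references:&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    uptime: float      # fraction of time the node has been reachable (0..1)
    bandwidth: float   # measured capacity in Mbit/s
    mobile: bool       # mobile nodes make poor routing anchors

def election_score(n):
    # Stable, fast, stationary nodes score highest; mobile nodes are
    # heavily penalized (the 0.25 factor is an arbitrary assumption).
    return n.uptime * n.bandwidth * (0.25 if n.mobile else 1.0)

def elect_super_nodes(nodes, k):
    """Pick the k best-scoring nodes to hold routing responsibility."""
    ranked = sorted(nodes, key=election_score, reverse=True)
    return [n.node_id for n in ranked[:k]]
```

A real mesh would re-run such an election periodically as nodes join, leave and move, which is where the cited routing research earns its keep.&lt;br /&gt;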
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services tolerant of lower speeds from the conventional infrastructure, freeing up bandwidth on the privately owned ISPs.  This in turn would speed up access for members of the population who desire higher speeds and the services that depend on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh has no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and could also make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural areas where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless to create a small network.  As the mesh grows, it can be self-organizing, with nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The relationship between the density of connection points and the speeds sustainable by the mesh has been studied, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in the provision of infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, normally considered a low-bandwidth service: if a large attachment were present, it would make sense to use the faster network connection to download it.  Thus software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so they can be reused later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned and the request is not passed on to the originating web server. Many aspects of websites do not change very often (e.g. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of total bandwidth used and, of this traffic, the level of similarity between requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. For many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in apparent latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage at the providing web server&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that reach it by serving data the cache has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
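The hit/miss behaviour underlying all of these advantages can be sketched in a few lines. This is a minimal in-memory cache with a single fixed time-to-live; real caches honour per-object HTTP headers such as Expires and Cache-Control rather than one TTL:&lt;br /&gt;

```python
import time

class WebCache:
    """Minimal proxy-style cache: serve stored copies until they expire,
    otherwise fetch from the origin and remember the result."""

    def __init__(self, ttl_seconds, fetch_from_origin):
        self.ttl = ttl_seconds
        self.fetch = fetch_from_origin   # callable: url -> body
        self.store = {}                  # url -> (body, expiry_time)
        self.hits = 0
        self.misses = 0

    def get(self, url):
        entry = self.store.get(url)
        if entry is not None and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]              # cache hit: origin never contacted
        self.misses += 1
        body = self.fetch(url)           # cache miss: go to the origin server
        self.store[url] = (body, time.monotonic() + self.ttl)
        return body
```

Every hit is a request the origin server never sees, which is the source of the bandwidth, latency and server-load savings listed above.&lt;br /&gt;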
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches are not examined in depth here, as we consider them implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. Web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand.&lt;br /&gt;
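The lookup just described can be sketched as follows: a miss climbs the hierarchy, and a copy is left at each tier on the way back down (the tier names are illustrative):&lt;br /&gt;

```python
class CacheLevel:
    """One tier (browser, local, regional, national) in a cache hierarchy."""

    def __init__(self, name, parent=None, origin=None):
        self.name, self.parent, self.origin = name, parent, origin
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name        # satisfied at this tier
        # Miss: ask the next tier up, or the origin server at the top.
        if self.parent is not None:
            body, source = self.parent.get(url)
        else:
            body, source = self.origin(url), "origin"
        self.store[url] = body   # copy left behind on the way back down
        return body, source
```

After one client fetches an object, a second client behind the same regional cache is served from that tier rather than from the origin server.&lt;br /&gt;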
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only a single level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that strictly hierarchical structures lack. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.8.7799&amp;amp;rep=rep1&amp;amp;type=pdf link 2]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
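The directory-based cooperation described above can be sketched as follows. In this toy version each peer eagerly advertises new objects to every neighbour; real schemes exchange compact digests or partition the URL space instead:&lt;br /&gt;

```python
class DistributedCache:
    """One peer in a flat cooperative cache: each peer keeps a directory of
    which URLs its neighbours hold and forwards misses to the right peer."""

    def __init__(self, name):
        self.name = name
        self.store = {}        # locally cached objects
        self.directory = {}    # url -> neighbouring cache believed to hold it
        self.peers = []

    def join(self, others):
        self.peers = others

    def put(self, url, body):
        self.store[url] = body
        # Advertise the new object so peers can update their directories.
        for peer in self.peers:
            peer.directory[url] = self

    def get(self, url):
        if url in self.store:
            return self.store[url]
        holder = self.directory.get(url)
        if holder is not None:
            return holder.store.get(url)   # fetch from the cooperating cache
        return None                        # full miss: would go to the origin
```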
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists, but a number of caches on each level also cooperate with each other in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol&amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers; their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. One benefit is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches and finally a national level. These would all be standardized so that regional or provincial caches could serve web requests for users in different regions or provinces. Formalized, standardized cache hierarchies would reduce wasted bandwidth and improve the end user experience. They would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, making the caches more fault tolerant and therefore more robust.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. End users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and can point users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
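One way the passive-storage variant could work is for the local proxy to place objects on participants&#039; machines by hashing the URL, so placement is deterministic and needs no per-object directory. The sketch below is our own illustration, not a protocol from the cited work; note that plain modulo placement reshuffles objects when membership changes, which a production system would avoid with consistent hashing:&lt;br /&gt;

```python
import hashlib

def assign_holder(url, participants):
    """Deterministically map a URL to one neighbourhood machine by hashing,
    so every node (and the local proxy) agrees on where a copy should live."""
    digest = hashlib.sha256(url.encode()).hexdigest()
    index = int(digest, 16) % len(participants)
    return participants[index]

class NeighbourhoodCache:
    """Passive-storage variant: the proxy decides placement; participants&#039;
    machines just hold the bytes they are assigned."""

    def __init__(self, participants):
        self.participants = participants
        self.shelves = {p: {} for p in participants}   # machine -> its objects

    def store(self, url, body):
        self.shelves[assign_holder(url, self.participants)][url] = body

    def lookup(self, url):
        return self.shelves[assign_holder(url, self.participants)].get(url)
```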
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power, removing the burden from users&#039; computers and letting a special-purpose device take over. Since most users reset their modems far less often than they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching became a public good and the infrastructure described above were put in place, an opportunity would arise to extend the classic definition of web caching. The new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while using the locally cached data as well. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back end to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the whole could still communicate through the web applications and data stored in their caches. A region undergoing a major natural catastrophe such as an earthquake, or one willfully disconnected from the rest of the Internet, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in reachable caches. This added robustness would certainly reduce the panic inherent in such situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by the caches of another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied nearby are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. The effect would be especially noticeable if the lowest level of the proposed caching hierarchy (the distributed, neighbourhood-level cache) were implemented: web requests satisfied within a user&#039;s immediate neighbourhood would be incredibly fast. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national-level caches rather than being sent all the way to the original web server.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached means that any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region is disconnected, all users in that region would still be able to use any application or data stored in any cache within that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies to improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both its software and its infrastructure, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these become popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions join, the overall system only gets better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions as well as the higher levels of the hierarchy (provincial, national, etc.) will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or to set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems are set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers; it can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Wikipedia/Domain Name System. Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
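&lt;br /&gt;
As a toy illustration of this name-to-address role, the sketch below resolves a name by walking a small hard-coded tree, the way a resolver walks the distributed DNS tree from the root toward an authoritative server. The zone data and the address are invented for the example; real DNS is vastly more elaborate.&lt;br /&gt;

```python
# Toy zone hierarchy: nested dicts stand in for delegated nameservers.
ROOT = {
    "ca": {
        "carleton": {
            "scs": "134.117.10.2",  # fictional address record
        },
    },
}

def resolve(name, tree=ROOT):
    """Walk the labels right-to-left, as a resolver walks the DNS tree."""
    node = tree
    for label in reversed(name.split(".")):
        if not isinstance(node, dict) or label not in node:
            raise KeyError("NXDOMAIN: " + name)
        node = node[label]
    if isinstance(node, dict):
        raise KeyError("no address record for " + name)
    return node

addr = resolve("scs.carleton.ca")  # the address stored in the toy tree
```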
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system. For the purposes of this discussion, DNS is treated as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service.  Users must accept that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist. Rogers Implements New Approach On Failed DNS Lookups. July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Eric Bangeman. Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects. arstechnica.com. July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries. Slashdot.org. August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS [http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in &amp;quot;good Samaritans&amp;quot; in a public community.&lt;br /&gt;
&lt;br /&gt;
Issues also arise when considering user privacy.  In the case of Google, there is reason to consider how Google will treat and use the information it gains access to, even given a clean track record in providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;JR Raphael. Google Public DNS: Good for privacy? PCWorld.com. December 2009 - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to determine every single thing that is sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These issues centre on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottlenecks because a small number of servers must be accessed by many users.  For example, Bell Canada customers are served by two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world cache records and refresh them only on a fixed schedule, so this period is required for changes to reach everyone.&lt;br /&gt;
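&lt;br /&gt;
The delay can be seen in a small simulation: a caching resolver keeps serving its cached answer until the record&#039;s time-to-live (TTL) expires, so a nameserver change becomes visible only after the TTL lapses. The names, addresses and 24-hour TTL below are arbitrary assumptions for the sketch.&lt;br /&gt;

```python
class CachingResolver:
    def __init__(self, authoritative):
        self.authoritative = authoritative  # name to (ip, ttl_seconds)
        self.cache = {}                     # name to (ip, expiry_time)

    def lookup(self, name, now):
        ip, expiry = self.cache.get(name, (None, -1))
        if expiry > now:
            return ip  # cached answer, possibly stale
        ip, ttl = self.authoritative[name]
        self.cache[name] = (ip, now + ttl)  # cache until the TTL expires
        return ip

auth = {"example.org": ("203.0.113.1", 86400)}  # 24-hour TTL
resolver = CachingResolver(auth)
resolver.lookup("example.org", now=0)            # caches the old address
auth["example.org"] = ("203.0.113.2", 86400)     # the domain moves
stale = resolver.lookup("example.org", now=3600)     # one hour later
fresh = resolver.lookup("example.org", now=100000)   # after the TTL lapses
```

Until the cached entry expires, every client of this resolver is directed to the old address, which is exactly the propagation window described above.&lt;br /&gt;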
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, indicated by the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack, however, like anything security based, it requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS.  DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the web caching schemes presented in the previous section, since the hierarchical structure described can function equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service.  Without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, the service must be both reliable and trusted.  The user base depends on some form of trusted source, whether a governed initiative, a corporately controlled process, or a user contributed service.   Having DNS in public hands will ensure this reliable service.  If the public is also in control of web caching, DNS can be incrementally deployed and rolled out as a piggyback to that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in mind in maintaining this service, the reliability and trust issue is addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection are averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most ideal for the public.  The incrementally deployable CoDoNS service described above would be a natural candidate for such an upgrade. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization should work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority could, given less overhead or caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for the users of a given Internet public good is mandatory. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern day society and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, effectively transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to identify future public good candidates. An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more each individual advantage will be noticed. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the world wide web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9524</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9524"/>
		<updated>2011-04-12T04:13:23Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Alternative/Public */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria for identifying other candidate public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it to one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we do not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using criteria decided by the ISPs.  While congestion control can benefit everyone, when the technology is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We also don&#039;t know whether this technology is deployed merely to decrease bandwidth consumption so the company can avoid upgrading the infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt;Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by requiring the ISPs to act in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing that behaviour is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect the centres together.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well, and as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
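The super-node election idea above can be sketched as follows. This is a simplified illustration only, not the DART or landmark-flooding algorithm from the cited papers; the node names, scoring weights and quota are hypothetical:&lt;br /&gt;

```python
# Sketch: electing mesh super nodes by availability and bandwidth.
# The scoring weights, quota and node records are illustrative assumptions.

def elect_super_nodes(nodes, quota=0.1):
    """Pick the top fraction of nodes, ranked by a stability score."""
    def score(n):
        # Favour highly available, high-bandwidth (i.e. mostly static) nodes.
        return 0.6 * n["availability"] + 0.4 * n["bandwidth_mbps"] / 100
    ranked = sorted(nodes, key=score, reverse=True)
    k = max(1, int(len(ranked) * quota))      # always elect at least one
    return [n["id"] for n in ranked[:k]]

nodes = [
    {"id": "home-pc-1", "availability": 0.99, "bandwidth_mbps": 50},
    {"id": "laptop-7",  "availability": 0.40, "bandwidth_mbps": 30},
    {"id": "home-pc-2", "availability": 0.95, "bandwidth_mbps": 20},
]
print(elect_super_nodes(nodes, quota=0.34))   # the most stable node wins
```

In a real mesh this election would be re-run as mobile nodes join and leave, which is exactly why maintaining routing information is the hard part.&lt;br /&gt;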
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services such as email, instant messaging, and other services tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural areas where the technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh increases in size it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the urban centre&#039;s higher-speed wired infrastructure.  The density of connection points has been studied and related to the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
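The switching behaviour described above might look something like the following sketch, where the size threshold and network labels are illustrative assumptions rather than part of any existing system:&lt;br /&gt;

```python
# Sketch of network selection on the two-level (public mesh + private ISP)
# system. The threshold and network names are hypothetical.

MESH_THRESHOLD_BYTES = 512 * 1024   # above this, prefer the fast ISP link

def pick_network(transfer_bytes, isp_available):
    """Choose which network should carry a given transfer."""
    if transfer_bytes > MESH_THRESHOLD_BYTES and isp_available:
        return "isp"     # large attachment: use the faster private link
    return "mesh"        # small traffic (or no ISP): use the public mesh

print(pick_network(10 * 1024, True))        # small email body -> mesh
print(pick_network(5 * 1024 * 1024, True))  # large attachment -> isp
```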
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data then, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of websites do not change very often (e.g. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in latency apparent to the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data has to travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage at the providing web server&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that must be passed through to it by serving the data the cache has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache brings to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
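The basic cache-then-fetch logic described in this section can be sketched as follows. This is a minimal illustration under assumed names; the TTL value and the fetch_from_origin callback are stand-ins, not any real proxy&#039;s API:&lt;br /&gt;

```python
import time

# Minimal sketch of a web cache: serve a stored copy while it is fresh,
# otherwise go to the origin server and remember the result.

class WebCache:
    def __init__(self, ttl=300):
        self.ttl = ttl          # seconds an object stays fresh (assumed)
        self.store = {}         # url -> (body, fetched_at)

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry is not None:
            body, fetched_at = entry
            if time.time() - fetched_at < self.ttl:
                return body                  # hit: origin never contacted
        body = fetch_from_origin(url)        # miss or stale: go to origin
        self.store[url] = (body, time.time())
        return body
```

A second request for the same URL within the TTL is answered from the store, which is exactly the bandwidth saving the advantages above describe.&lt;br /&gt;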
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. In this type of system, web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing copies of popular web objects to propagate towards the demand. &lt;br /&gt;
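The upward-miss, downward-copy behaviour just described can be sketched as follows (a toy illustration; the level names and fetch callback are hypothetical):&lt;br /&gt;

```python
# Sketch of a hierarchical cache: a miss is passed up a level, and the
# response leaves a copy at every level on the way back down.

class CacheLevel:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.store = name, parent, {}

    def get(self, url, fetch_from_origin):
        if url in self.store:
            return self.store[url], self.name        # hit at this level
        if self.parent is not None:
            body, hit_level = self.parent.get(url, fetch_from_origin)
        else:
            body, hit_level = fetch_from_origin(url), "origin"
        self.store[url] = body                        # copy left behind
        return body, hit_level

national = CacheLevel("national")
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)

body, where = local.get("http://example.com/", lambda u: "page")
# the first request is served by the origin; a repeat hits the local cache
```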
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is a single level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available to strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
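The metadata-based cooperation just described can be sketched as follows. As an illustrative simplification, the shared meta-data here is simply each peer&#039;s set of cached URLs; a real system would exchange something far more compact:&lt;br /&gt;

```python
# Sketch of one-level distributed caching: each cache consults metadata
# describing what its peers hold, and only contacts the origin server
# when no cooperating cache can satisfy the request.

class DistributedCache:
    def __init__(self, name):
        self.name, self.store, self.peers = name, {}, []

    def summary(self):
        return set(self.store)              # metadata shared with peers

    def get(self, url, fetch_from_origin):
        if url in self.store:
            return self.store[url], self.name
        for peer in self.peers:
            if url in peer.summary():       # consult peer metadata first
                return peer.store[url], peer.name
        body = fetch_from_origin(url)       # nobody has it: go to origin
        self.store[url] = body
        return body, "origin"

a, b = DistributedCache("cache-a"), DistributedCache("cache-b")
a.peers, b.peers = [b], [a]
b.store["http://example.com/"] = "page"
print(a.get("http://example.com/", lambda u: "page"))  # served by cache-b
```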
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists; however, there are a number of caches at each level that cooperate with each other in a distributed fashion. This type of system can combine the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol&amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
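The hybrid lookup can be sketched by combining the two previous ideas: at each level the sibling caches are consulted first (the distributed part), and only then is the request passed up to the parent level (the hierarchical part). This is a toy illustration, not ICP itself; each cache is just a dictionary:&lt;br /&gt;

```python
# Sketch of a hybrid (hierarchical + distributed) cache lookup.
# A real deployment would query siblings with a protocol such as ICP.

def hybrid_get(url, level, fetch_from_origin):
    """level = {'store': dict, 'siblings': [levels], 'parent': level|None}"""
    if url in level["store"]:
        return level["store"][url]              # local hit
    for sibling in level["siblings"]:
        if url in sibling["store"]:
            return sibling["store"][url]        # sibling hit: no ascent
    if level["parent"] is not None:
        body = hybrid_get(url, level["parent"], fetch_from_origin)
    else:
        body = fetch_from_origin(url)           # top of the hierarchy
    level["store"][url] = body                  # leave a copy here
    return body

regional = {"store": {}, "siblings": [], "parent": None}
cache_a = {"store": {}, "siblings": [], "parent": regional}
cache_b = {"store": {"http://example.com/": "page"}, "siblings": [], "parent": regional}
cache_a["siblings"] = [cache_b]
print(hybrid_get("http://example.com/", cache_a, lambda u: "page"))  # sibling hit
```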
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would now work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches, and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized cache hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. It could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently carried at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
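The passive variant above, where the proxy decides what data lives where, can be sketched with a simple hash-based placement rule. The host names are hypothetical and the rule is only an illustration of deterministic placement, not a cited design:&lt;br /&gt;

```python
import hashlib

# Sketch: the neighbourhood proxy assigns each cached URL to one of the
# participating home machines by hashing the URL. Because the mapping is
# deterministic, any node can recompute it without a central index.

def assign_host(url, hosts):
    """Map a URL to one of the participating machines."""
    digest = hashlib.sha256(url.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(hosts)
    return hosts[index]

hosts = ["house-12", "house-14", "house-17"]
host = assign_host("http://example.com/logo.png", hosts)
# every participant computes the same host, so the proxy can point a
# requesting user at the right neighbour to fetch the cached object
```

A production scheme would need something like consistent hashing so that hosts joining or leaving do not reshuffle the whole cache, but the placement idea is the same.&lt;br /&gt;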
&lt;br /&gt;
Another option to allow lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and can actually vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web application code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. This means that a small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests being sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by the caches of another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in such web requests being satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. The improvement would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication would provide fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability in combination with full web application caching: in the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached means that any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that innovations in web caching, along with new technologies that improve how caching is done, can be implemented whenever doing so is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits of making web caching a public good, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place would certainly involve significant infrastructure costs, even if it incorporated the ISP caches already in place. Although a considerable amount of infrastructure may be available in large urban centers, rural regions as well as caches at the higher levels of the hierarchy (provincial, national, etc.) would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system.  DNS (Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Wikipedia/Domain Name System. Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher level view of the system is taken on.  It is considered for the purposes of this discussion as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database or tree mapping names to IP addresses for its users to access.&lt;br /&gt;
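The simplified model above, a static tree that returns an IP address when queried with a domain name, can be sketched in a few lines of Python; the tree contents below are illustrative only:

```python
# A toy model of the paper's view of DNS: a static tree, walked from the
# top-level label down, returning an IP address for a domain name.
# All names and addresses are made up for illustration.
DNS_TREE = {
    "ca": {
        "carleton": {
            "scs": {"_addr": "134.117.0.1"},
        },
    },
    "com": {
        "example": {"_addr": "93.184.216.34"},
    },
}


def resolve(domain, tree=DNS_TREE):
    """Walk the tree from the TLD down; return an IP or None."""
    node = tree
    for label in reversed(domain.split(".")):
        node = node.get(label)
        if node is None:
            return None  # NXDOMAIN, in real DNS terms
    return node.get("_addr")
```

A real resolver is distributed and cached, but the query model, name in, address out, is exactly this tree walk.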
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service, and it is understood that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist. Rogers Implements New Approach On Failed DNS Lookups. July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Eric Bangeman. Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects. arstechnica.com. July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries. Slashdot.org. August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS [http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.  In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given a clean track record when it comes to providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit from the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These issues concern bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users.  For example, Bell Canada customers across the entire country are served by just two servers for this service.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also affects attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world refresh their records on a static schedule and, when caching is taken into account, this period is required to get the changes across.&lt;br /&gt;
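A minimal Python sketch of this propagation delay, with a simulated clock and an assumed 48-hour time-to-live (TTL), shows why a cached record keeps being served until its scheduled refresh:

```python
# Sketch of why nameserver changes propagate slowly: a caching resolver
# keeps serving a record until its TTL expires, so a change at the
# origin is invisible until then. Clock values and names are simulated.
class CachingResolver:
    def __init__(self, origin, ttl_seconds):
        self.origin = origin          # authoritative name -> IP mapping
        self.ttl = ttl_seconds
        self.cache = {}               # name -> (ip, expiry_time)

    def lookup(self, name, now):
        entry = self.cache.get(name)
        if entry is not None and now < entry[1]:
            return entry[0]           # still fresh: serve the cached IP
        ip = self.origin[name]        # expired or absent: re-fetch
        self.cache[name] = (ip, now + self.ttl)
        return ip


origin = {"example.org": "203.0.113.10"}
resolver = CachingResolver(origin, ttl_seconds=172800)  # 48 hours

resolver.lookup("example.org", now=0)               # record enters the cache
origin["example.org"] = "203.0.113.99"              # the domain moves servers
stale = resolver.lookup("example.org", now=100000)  # TTL not expired: old IP
fresh = resolver.lookup("example.org", now=172801)  # TTL expired: new IP
```

Until every resolver's cached entry times out, some users keep reaching the old address, which is the up-to-48-hour window described above.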
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, identified in the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS performance.  Much as web caching does for regular content browsing, DNS caching improves performance by reducing latency.  DNS caches could be contained within the web caching schemes presented in the previous section, as the hierarchical structure described can function equally well for DNS purposes.  Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users are able to dictate which sites load quickly simply by visiting them.&lt;br /&gt;
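The piggybacking idea can be sketched as follows; the level names and the single authoritative table are assumptions for illustration, not part of any deployed system:

```python
# Illustrative sketch: DNS lookups walk up the same neighbourhood ->
# regional -> provincial hierarchy as the web cache, and each level
# caches the answer on the way back, so the names users actually visit
# become the fast ones.
class DnsCacheLevel:
    def __init__(self, name, parent=None, authority=None):
        self.name = name
        self.parent = parent          # next level up; None at the top
        self.authority = authority    # top level falls back to authoritative DNS
        self.cache = {}

    def lookup(self, domain):
        if domain in self.cache:
            return self.cache[domain], self.name   # answered locally
        if self.parent is not None:
            ip, answered_by = self.parent.lookup(domain)
        else:
            ip, answered_by = self.authority[domain], "authoritative"
        self.cache[domain] = ip       # piggyback: remember on the way back
        return ip, answered_by


authority = {"example.ca": "198.51.100.7"}
provincial = DnsCacheLevel("provincial", authority=authority)
regional = DnsCacheLevel("regional", parent=provincial)
neighbourhood = DnsCacheLevel("neighbourhood", parent=regional)

ip1, source1 = neighbourhood.lookup("example.ca")  # walks to the authority
ip2, source2 = neighbourhood.lookup("example.ca")  # now answered next door
```

The second lookup is answered by the neighbourhood cache itself, which is the democratic effect described: a visited name is fast for every neighbour afterward.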
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It also has the benefit of being incrementally deployable, a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  Directing traffic on the Internet, whether user-based or application-based, requires the use of a naming service; without a functional service, the bulk of Internet traffic would falter, as it would not know where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, the service is required to be both reliable and trusted.  The user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.  Having DNS in public hands will ensure this reliable service.  If the public also controls web caching, DNS can be incrementally deployed and rolled out as a piggyback to that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is satisfied.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection will be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it will be done when deemed most ideal for the public.  The incremental deployment of the CoDoNS service described above is one example of an upgrade that could be rolled out this way. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users simply use the sites they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates its upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If it is not, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is mandatory. If an aspect of the Internet cannot be guaranteed to be accessible to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system, and it is a central concern here as well. Any potential public good should provide some kind of performance improvement; otherwise there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience, and the parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before being widely introduced. This allows the new systems to grow dynamically, starting in the areas that need them most and eventually reaching more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to identify future public goods candidates. An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would each have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more we will notice each individual advantage. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9523</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9523"/>
		<updated>2011-04-12T04:12:51Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* ISP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies enabling it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria to identify other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be moved out of the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using various criteria decided by the ISPs.  While this technology can benefit everyone, when it is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We also don&#039;t know if the technology is deployed just to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt;Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During an uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
Given the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problems are that politicians have their own goals and can be influenced unduly by private industries through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably, the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desired higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes: users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability: users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM Transactions on Networking, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres in which the mesh is located could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect the urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
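The super-node election described above can be sketched in a few lines. Everything here is an illustrative assumption — the ranking criteria, names, and numbers are invented for the sketch and are not taken from DART or Scalable Landmark Flooding.&lt;br /&gt;

```python
# Hypothetical sketch: elect super nodes in a mesh by availability, then bandwidth.
# Criteria and values are illustrative, not from any cited routing protocol.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    availability: float   # fraction of time the node is reachable (0.0-1.0)
    bandwidth_mbps: float

def elect_super_nodes(nodes, count):
    """Rank nodes by availability, then bandwidth, and pick the top `count`."""
    ranked = sorted(nodes, key=lambda n: (n.availability, n.bandwidth_mbps),
                    reverse=True)
    return ranked[:count]

mesh = [
    Node("home-pc", 0.95, 50.0),   # fairly static home machine
    Node("laptop", 0.40, 30.0),    # mobile node, variable availability
    Node("phone", 0.30, 10.0),
    Node("router", 0.99, 100.0),
]
supers = elect_super_nodes(mesh, 2)
```

Stable, well-connected machines (home computers, routers) naturally win the election, matching the division between static and mobile nodes described above.&lt;br /&gt;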
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services that are tolerant of lower speeds, such as email and instant messaging.  This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and could also make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas; it has already been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Due to its low population density, Canada has areas that parallel the rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied and related to the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government provide some of this infrastructure would necessitate an increase in taxes.  Since the support would come from all levels of government, the cost would be distributed across them, making the increase at any one level small.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, normally considered a low-bandwidth service: if a large attachment were present, it would make sense to use the faster network connection to download it.  Thus software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of CPU cycles used and additional power consumed to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
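The software change described under Software Changes above — choosing the mesh for low-bandwidth traffic and the ISP link for large transfers — can be sketched as follows. The threshold and network names are illustrative assumptions, not part of any proposed standard.&lt;br /&gt;

```python
# Sketch of network selection in the two-level mesh/ISP system described above.
# The threshold value and the network labels are invented for illustration.
MESH_THRESHOLD_BYTES = 5 * 1024 * 1024   # assume: beyond ~5 MB, prefer speed

def choose_network(transfer_bytes, isp_available):
    """Pick the public mesh for small transfers, the faster ISP link otherwise."""
    if transfer_bytes > MESH_THRESHOLD_BYTES and isp_available:
        return "isp"     # large attachment: worth using the faster paid network
    return "mesh"        # small message: the public mesh is good enough

net_small = choose_network(200 * 1024, isp_available=True)        # plain email
net_large = choose_network(20 * 1024 * 1024, isp_available=True)  # big attachment
```

Note that when the ISP link is unavailable, everything falls back to the mesh, which is exactly the basic level of service the public infrastructure is meant to guarantee.&lt;br /&gt;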
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be served later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt;Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link]&amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt;Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity among requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. For many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in latency apparent to the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end-user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage at the providing web server&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections the server must handle by serving data the cache has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
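The basic request flow described at the start of this section — serve from the cache on a hit, fetch from the origin and store on a miss — can be sketched as follows. The fetch function is a placeholder for a real HTTP request, not an actual client.&lt;br /&gt;

```python
# Illustrative sketch of the basic web-cache lookup described above.
# fetch_from_origin is a stand-in for a real network fetch.
cache = {}

def fetch_from_origin(url):
    return f"<html>content of {url}</html>"  # placeholder response body

def handle_request(url):
    """Return cached data when present; otherwise fetch, store, and return it."""
    if url in cache:
        return cache[url], "HIT"     # served locally, no origin traffic
    body = fetch_from_origin(url)
    cache[url] = body                # store for later requests
    return body, "MISS"

first = handle_request("http://example.org/logo.png")   # goes to the origin
second = handle_request("http://example.org/logo.png")  # served from the cache
```

The "barring certain conditions" caveat above (expiry, validation headers, and so on) is exactly what a real cache adds on top of this basic hit/miss logic.&lt;br /&gt;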
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
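A toy model of this hierarchical lookup: a miss is passed up the chain (client, local, regional, national), and on the way back down a copy is left at every lower level. The level names and the origin fetch are illustrative only.&lt;br /&gt;

```python
# Sketch of the hierarchical web-cache lookup described above.
def origin_fetch(url):
    return f"data for {url}"  # stand-in for contacting the origin server

levels = [{"name": "client", "store": {}},
          {"name": "local", "store": {}},
          {"name": "regional", "store": {}},
          {"name": "national", "store": {}}]

def lookup(url):
    """Walk up the hierarchy; on the way back, leave a copy at each lower level."""
    for i, level in enumerate(levels):
        if url in level["store"]:
            body = level["store"][url]
            hit_at = level["name"]
            break
    else:  # missed at every level: go to the origin server
        body, hit_at, i = origin_fetch(url), "origin", len(levels)
    for lower in levels[:i]:          # populate every level below the hit
        lower["store"][url] = body
    return body, hit_at

b1 = lookup("http://example.org/")   # miss everywhere, served by the origin
b2 = lookup("http://example.org/")   # now hits the client-level cache
```

This is how popular sites "propagate towards the demand": each miss seeds every cache between the user and wherever the data was found.&lt;br /&gt;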
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is a single level of caches that cooperate with one another to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available to strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
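The meta-data idea behind distributed caching can be sketched as a directory each cache keeps of which peer holds which URL. The class and method names here are invented for illustration; real systems exchange this metadata with dedicated protocols.&lt;br /&gt;

```python
# Sketch of one-level distributed caching: each cache advertises what it
# stores to its peers, and forwards requests to the peer that holds the data.
class PeerCache:
    def __init__(self, name):
        self.name = name
        self.store = {}        # URLs this cache actually holds
        self.peers = []        # cooperating caches at the same level
        self.directory = {}    # meta-data: url -> name of the peer holding it

    def insert(self, url, body):
        self.store[url] = body
        self.directory[url] = self.name
        for p in self.peers:               # advertise to cooperating caches
            p.directory[url] = self.name

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        owner = self.directory.get(url)
        if owner:                          # satisfied by a cooperating peer
            peer = next(p for p in self.peers if p.name == owner)
            return peer.store[url], owner
        return None, "origin"              # would fall back to the web server

a, b = PeerCache("cache-a"), PeerCache("cache-b")
a.peers, b.peers = [b], [a]
a.insert("http://example.org/x", "payload")
```

A request arriving at cache-b for a URL held by cache-a is served peer-to-peer instead of from the origin, which is the load-balancing and fault-tolerance benefit described above.&lt;br /&gt;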
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, but a number of caches on each level cooperate with each other in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol&amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved either by the government taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches and finally a national level. All of these would be standardized so that regional or provincial caches could serve web requests for users in different regions or provinces. Formalized, standardized web cache hierarchies would reduce wasted bandwidth and improve the end-user experience. They would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and to build neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power, removing the burden from the users&#039; computers and letting a special-purpose device take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while also using the locally cached data. In this type of system, web developers would develop their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in all of the reachable caches. This added robustness could reduce the panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that might be satisfiable by caches of another local ISP must be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the proposed caching hierarchy (the distributed, neighbourhood-level cache) were implemented: web requests satisfied within a user&#039;s immediate neighbourhood would be extremely fast. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the hierarchy adds a level of reliability that is not present in modern web caching. Since the storage space of the distributed caches at each level is likely to be larger than the amount that can be used efficiently as a cache, data could be duplicated. This duplication would provide fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability with full web application caching: in the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is re-established, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, because full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that innovations in web caching, along with new technologies that improve how caching is done, could be implemented whenever they are in the best interests of the public. Currently we must rely on such upgrades being a worthwhile investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both software and infrastructure, is incrementally deployable. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits of making web caching a public good, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it incorporated the ISP caches already in place, would certainly involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Wikipedia/Domain Name System. Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a simpler, higher-level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP): a user&#039;s ISP maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
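The static, distributed tree described above can be illustrated with a toy model. This is a hypothetical sketch only; the names, addresses and nesting below are invented for illustration and are not real DNS data.&lt;br /&gt;

```python
# Toy model of DNS as a static, distributed tree: the labels of a name
# are followed from right to left (most general to most specific) until
# a record, or a miss, is reached. Names/addresses here are illustrative.

DNS_TREE = {
    "ca": {
        "carleton": {
            "scs": {"homeostasis": "134.117.0.10"},  # hypothetical address
        },
    },
}

def resolve(name, tree=DNS_TREE):
    """Walk the tree right-to-left through the labels of `name`."""
    node = tree
    for label in reversed(name.split(".")):
        if not isinstance(node, dict) or label not in node:
            return None  # the equivalent of NXDOMAIN in real DNS
        node = node[label]
    return node if isinstance(node, str) else None

print(resolve("homeostasis.scs.carleton.ca"))  # -> 134.117.0.10
print(resolve("nonexistent.example"))          # -> None
```

In the real system each level of this tree lives on different servers, but the lookup logic an ISP resolver performs on a user&#039;s behalf follows the same shape.&lt;br /&gt;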
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  Users implicitly accept that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist. Rogers Implements New Approach On Failed DNS Lookups. July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Eric Bangeman. Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects. arstechnica.com. July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries. Slashdot.org. August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternatives, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues arise here as well.  In the case of Google, even given a clean track record in providing free applications and services, there is reason to consider how Google will treat and use the information it gains access to.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to determine every single thing that is sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing ample content, it is difficult to imagine this large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These issues concern bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must be accessed by many users.  For example, Bell Canada customers are served by just two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world refresh their records on static schedules and, because cached records continue to be served until they expire, this period is required for the changes to get across.&lt;br /&gt;
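The propagation delay can be sketched as a simple time-to-live (TTL) cache: a cached record keeps being served until its TTL expires, so a nameserver change stays invisible until every downstream cache has aged out its old copy. This is a minimal illustration, not a real resolver; the names, address and TTL below are hypothetical.&lt;br /&gt;

```python
# Minimal TTL cache sketch: stale records are the reason DNS changes
# take time to propagate. All names and numbers are hypothetical.
import time

class TTLCache:
    def __init__(self):
        self._store = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._store[name] = (address, now + ttl_seconds)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if now >= expiry:
            del self._store[name]  # stale: force a fresh upstream lookup
            return None
        return address

cache = TTLCache()
cache.put("example.org", "192.0.2.1", ttl_seconds=3600, now=0)
print(cache.get("example.org", now=10))    # fresh  -> 192.0.2.1
print(cache.get("example.org", now=4000))  # expired -> None
```

Until the expiry passes, the cache keeps answering with the old address even if the authoritative record has already changed; this is the delay the text describes.&lt;br /&gt;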
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in both performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS.  Much as web caching does for regular content browsing, DNS caching improves performance by reducing latency.  DNS caches could be contained within the web caching schemes presented in the previous section, since the hierarchical structure described there functions equally well for DNS purposes.  Ideally, the DNS cache would essentially piggyback on each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
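The piggybacking idea can be sketched as a chain of caches consulted nearest-first, with the answer written back into every level that missed on the way up. This is a hypothetical illustration; the level names, the stand-in authoritative table and the address are invented for the example.&lt;br /&gt;

```python
# Hierarchical DNS cache sketch: a query climbs neighbourhood -> city ->
# province, and the answer back-fills every cache it passed through, so
# popular sites are "voted" into nearby caches simply by being visited.

AUTHORITATIVE = {"popular.example": "198.51.100.7"}  # stand-in for a root lookup

def lookup(name, levels):
    """levels: list of dict caches ordered nearest-first."""
    missed = []
    for cache in levels:
        if name in cache:
            address = cache[name]
            break
        missed.append(cache)
    else:
        address = AUTHORITATIVE.get(name)  # fell past every cache level
    if address is not None:
        for cache in missed:  # back-fill the nearer caches
            cache[name] = address
    return address

neighbourhood, city, province = {}, {}, {}
lookup("popular.example", [neighbourhood, city, province])
# a second lookup is now answered from the nearest cache
assert "popular.example" in neighbourhood
```

The same `levels` chain could hold web objects rather than addresses, which is the sense in which a DNS cache can ride along with the web cache hierarchy.&lt;br /&gt;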
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, a very important property when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service.  Without a functional service, the bulk of Internet traffic would falter, as it would not know where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, the service is required to be both reliable and trusted.  The user base depends on some form of trusted source, whether a governed initiative, a corporately controlled process, or a user-contributed service.  Having DNS in public hands will ensure this reliable service.  If the public also controls web caching, DNS can be incrementally deployed, piggybacking on that rollout.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest guiding the maintenance of this service, the reliability and trust issue is satisfied.  Users must trust some entity with the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection will be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of a service such as CoDoNS would allow a public authority to roll out such an upgrade gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency and wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring or mandating, and then maintaining, the current system will impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemas sooner than some form of public authority, given less overhead or caution when it comes to decision making.  Users may miss out on the newest services available as the authority evaluates any upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something essential that everyone should have access to, ensuring a basic level of service for a given Internet public good is mandatory. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern-day society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only certain portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to identify future public goods candidates. An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria are converted into public goods, the more noticeable each individual advantage will become. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9522</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9522"/>
		<updated>2011-04-12T04:10:49Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* ISP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable, both to individuals and to the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (e.g. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be removed from the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using criteria decided by the ISPs.  While congestion control can benefit everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We also don&#039;t know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt;Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet simply by forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably, the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve, and as the level of support increases, the publicly offered speed could increase as well.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
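The election of super nodes described above can be sketched in a few lines. This is an illustrative toy, not the DART or landmark-flooding protocols cited: the node names and the availability-times-bandwidth score are assumptions made for the example.&lt;br /&gt;

```python
# Toy super-node election for a mesh (hypothetical names and scoring).
# Static home machines, being more available, tend to win the election.

def elect_super_nodes(nodes, count):
    """Pick the `count` nodes best suited to hold routing information.

    `nodes` maps a node id to (availability, bandwidth_mbps); the score
    simply multiplies the two, favouring stable, fast nodes.
    """
    ranked = sorted(nodes, key=lambda n: nodes[n][0] * nodes[n][1], reverse=True)
    return ranked[:count]

mesh = {
    "home-pc-1": (0.99, 50.0),  # fairly static node: high availability
    "home-pc-2": (0.95, 25.0),
    "laptop-1":  (0.40, 30.0),  # mobile node: variable availability
    "phone-1":   (0.20, 10.0),
}
super_nodes = elect_super_nodes(mesh, 2)
```

In a real mesh this election would be decentralized and rerun continually as nodes join, move and leave.&lt;br /&gt;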
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services such as email, instant messaging, and other services tolerant of lower speeds.  This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh has no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a major benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and could help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Due to its low population density, Canada has many areas comparable to the rural regions where the technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll out.  It could start in a single neighborhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh grows it can be self-organizing, with nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher speed wired infrastructure of the urban centre.  The density of connection points has been studied, and it is related to the speeds the mesh can sustain, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in the provision of this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, however, the cost would be distributed across them, making the increase at any one level small.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to use the faster network connection to download it.  Software would thus have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so they can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (e.g. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in latency apparent to the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
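The store-on-miss, serve-on-hit behaviour described above can be sketched as follows. This is a minimal illustration assuming a hypothetical `fetch_origin` callable; real caches also honour HTTP cache-control headers and eviction policies, which are omitted here.&lt;br /&gt;

```python
import time

class WebCache:
    """Minimal proxy cache: serve stored copies, fetch from origin on a miss."""

    def __init__(self, fetch_origin, ttl_seconds=300):
        self.fetch_origin = fetch_origin   # retrieves data from the origin server
        self.ttl = ttl_seconds             # the "barring certain conditions" check
        self.store = {}                    # url -> (data, expiry_time)
        self.hits = self.misses = 0

    def get(self, url):
        entry = self.store.get(url)
        if entry and entry[1] > time.time():
            self.hits += 1                 # fresh copy: origin server never contacted
            return entry[0]
        self.misses += 1                   # miss or stale: fetch, then keep a copy
        data = self.fetch_origin(url)
        self.store[url] = (data, time.time() + self.ttl)
        return data

cache = WebCache(lambda url: "origin copy of " + url)
cache.get("http://example.com/logo.png")   # miss: travels to the origin
cache.get("http://example.com/logo.png")   # hit: served locally
```

The hit/miss counters make the bandwidth argument above concrete: every hit is one request that never left the ISP.&lt;br /&gt;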
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
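The lookup path through such a hierarchy can be sketched as below: a miss climbs the levels, and the answer leaves a copy at each lower level on the way back down. The level names and origin fetcher are hypothetical.&lt;br /&gt;

```python
def hierarchical_get(url, levels, fetch_origin):
    """Resolve `url` through caches ordered lowest (client) to highest (national).

    Each level is modelled as a plain dict; `fetch_origin` is consulted
    only when every level misses.
    """
    for depth, cache in enumerate(levels):
        if url in cache:                 # request satisfied at this level
            data = cache[url]
            break
    else:
        depth, data = len(levels), fetch_origin(url)
    for cache in levels[:depth]:         # copy left at each level passed through
        cache[url] = data
    return data

client, local, regional, national = {}, {}, {}, {}
hierarchical_get("http://example.com/", [client, local, regional, national],
                 lambda url: "page body")
```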
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
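The cooperation via shared meta-data described above can be sketched as follows; this toy keeps the directory as one in-memory dict, where a real system would replicate and age that meta-data across the cooperating caches. The names are hypothetical.&lt;br /&gt;

```python
class PeerCache:
    """One cache in a single-level cooperative: misses go to peers before origin."""

    def __init__(self, name, directory):
        self.name = name
        self.store = {}
        self.directory = directory        # shared meta-data: url -> holding cache

    def put(self, url, data):
        self.store[url] = data
        self.directory[url] = self        # advertise the copy to cooperating peers

    def get(self, url, fetch_origin):
        if url in self.store:             # local hit
            return self.store[url]
        holder = self.directory.get(url)
        if holder is not None:            # sibling cache hit: no trip to origin
            return holder.store[url]
        data = fetch_origin(url)          # every peer missed
        self.put(url, data)
        return data

directory = {}
ottawa, toronto = PeerCache("ottawa", directory), PeerCache("toronto", directory)
ottawa.put("http://example.com/", "cached body")
body = toronto.get("http://example.com/", lambda url: "origin body")
```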
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would now work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized, standardized web cache hierarchies would reduce wasted bandwidth and improve the end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capacity. This increase in storage would allow more web data to be stored in more places, making the caches more fault tolerant and hence more robust.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding themselves which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option to allow lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users would not reset their modems as often as they shut down their computers, this would allow greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would develop their applications to make use of the available resources and then maintain a minimal back end to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could then realistically come together to implement their application. Growing popularity would no longer necessarily translate into dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet which, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached web request from a user of one ISP that could be satisfied by a cache implemented by another local ISP must instead be satisfied all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such web requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached means that any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that innovations in web caching, along with new technologies that improve how web caching is done, can be implemented whenever they are in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place would certainly entail significant infrastructure costs, even if it incorporated the ISP caches already in place. Although a considerable amount of infrastructure may be available in large urban centers, rural regions and caches at the higher levels of the hierarchy (provincial, national, etc.) would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work, whether converting the old ISP caches or setting up new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Wikipedia/Domain Name System. Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher-level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. The current provider of the service falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database or tree of names to IP addresses for their users to access.&lt;br /&gt;
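At this level of abstraction, a name lookup is a single query-response exchange with the resolver. A minimal sketch in Python using the standard library resolver (the name localhost is used so the sketch works without network access):&lt;br /&gt;

```python
import socket

# Ask the system's configured resolver (typically the ISP's DNS server)
# to map a human-readable name onto an IP address.
# "localhost" is used here so the sketch works without network access.
address = socket.gethostbyname("localhost")
print(address)  # 127.0.0.1
```

Any real domain name could be substituted for localhost; the call then consults the distributed DNS tree described above.&lt;br /&gt;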
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  In practice, this means all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist. Rogers Implements New Approach On Failed DNS Lookups. July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica. Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org. Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any number of alternative services, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues arise here as well.  In the case of Google, even given a clean track record in providing free applications and services to users, there is reason to consider how Google would treat and use the information it gains access to.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep insight into user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit from the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These issues centre on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users.  For example, Bell Canada customers are served by two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world cache records and refresh them on a static schedule, so a change is not seen everywhere until the cached copies expire.&lt;br /&gt;
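The effect of cached records expiring on a schedule (their time-to-live, or TTL) can be illustrated with a toy resolver cache; this is a sketch only, and the names, addresses and TTLs used are hypothetical:&lt;br /&gt;

```python
import time

class DnsCache:
    """Toy resolver cache: a record is served until its TTL expires."""
    def __init__(self):
        self._records = {}  # name -> (address, expiry timestamp)

    def put(self, name, address, ttl):
        self._records[name] = (address, time.time() + ttl)

    def get(self, name):
        entry = self._records.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if time.time() >= expiry:      # stale: the authoritative record
            del self._records[name]    # may have changed upstream
            return None
        return address

cache = DnsCache()
cache.put("example.org", "198.51.100.4", ttl=60)
print(cache.get("example.org"))  # served from cache until the TTL lapses
```

Until every such cache has expired its copy, some users keep receiving the old address, which is exactly the propagation delay described above.&lt;br /&gt;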
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified under the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the web caching schemes presented in the previous section, as the hierarchical structure described there functions equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
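The piggybacking idea can be sketched as a chain of cache tiers, where a miss at one level is resolved by the level above and the answer is stored on the way back down. The tier names and addresses here are hypothetical:&lt;br /&gt;

```python
class CacheLevel:
    """One tier in the hierarchy (neighbourhood, city, region, ...)."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.store = name, parent, {}

    def lookup(self, domain, authoritative):
        if domain in self.store:
            return self.store[domain]        # hit at this level
        if self.parent is not None:
            address = self.parent.lookup(domain, authoritative)
        else:
            address = authoritative[domain]  # fall back to the root source
        self.store[domain] = address         # populate on the way back down
        return address

# Hypothetical three-tier hierarchy mirroring the web caching scheme.
region = CacheLevel("region")
city = CacheLevel("city", parent=region)
neighbourhood = CacheLevel("neighbourhood", parent=city)

authoritative = {"example.net": "203.0.113.7"}  # stand-in for the DNS tree
neighbourhood.lookup("example.net", authoritative)
```

After the first lookup, every tier along the path holds the record, so subsequent lookups by nearby users are answered at the lowest level — the democratic effect described above.&lt;br /&gt;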
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It also has the benefit of being incrementally deployable, which is very important when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been eliminated.&lt;br /&gt;
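The general idea of spreading name records across many peers, so that no single server is a bottleneck, can be illustrated with a generic consistent-hashing placement sketch. Note that this is a stand-in for the concept only, not the actual Beehive replication scheme CoDoNS uses, and the peer names are hypothetical:&lt;br /&gt;

```python
import hashlib
from bisect import bisect_right

def node_for(name, nodes):
    """Map a domain name onto one of the peers with consistent hashing.

    Every peer computes the same answer independently, so lookups can be
    routed without any central server."""
    # Place each peer on a hash ring, sorted by its hash value.
    ring = sorted((int(hashlib.sha1(n.encode()).hexdigest(), 16), n)
                  for n in nodes)
    key = int(hashlib.sha1(name.encode()).hexdigest(), 16)
    # The record lives on the first peer clockwise from the name's hash.
    idx = bisect_right([h for h, _ in ring], key) % len(ring)
    return ring[idx][1]

nodes = ["peer-a", "peer-b", "peer-c"]
owner = node_for("example.com", nodes)  # deterministic, no central lookup
```

Because placement is deterministic, any peer can locate the record's owner directly, and adding or removing a peer only moves the records adjacent to it on the ring.&lt;br /&gt;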
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate for a public good.  Directing traffic on the Internet, whether user-based or application-based, requires a naming service; without a functional one, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, the service must be both reliable and trusted, and its user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service.  Having DNS in public hands would ensure this reliable service, and if the public also controls web caching, DNS can be incrementally deployed by piggybacking on that infrastructure.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issues are satisfied.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployment of the CoDoNS service described above is one example of an upgrade that a public authority could roll out gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such arrangement must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring or mandating, and then maintaining, the current system will impose a financial burden on the public, as does any good brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead or caution in their decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for the users of a given Internet public good is mandatory. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only certain portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived; using this set of criteria, one would be able to identify future public-good candidates. An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and only true, way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9521</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9521"/>
		<updated>2011-04-12T04:09:42Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies enabling it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public-good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria for identifying other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable for individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose adding the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs on an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be removed from the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet, while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using criteria decided by the ISPs.  While the technology can benefit everyone, with it implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know whether it is deployed simply to decrease bandwidth consumption so that a company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP, which could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt;Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably this new infrastructure would not be as fast as the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability, consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as supernodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, though performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
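The route discovery such a mesh relies on can be sketched as breadth-first flooding over an adjacency map. This toy sketch illustrates the general idea only, not the DART or landmark-flooding protocols cited above, and the node names are hypothetical:&lt;br /&gt;

```python
from collections import deque

def flood_route(mesh, source, dest):
    """Breadth-first flooding: each node forwards to neighbours it has not
    yet seen, and the first path to reach dest is returned."""
    frontier = deque([[source]])
    visited = {source}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dest:
            return path
        for neighbour in mesh.get(path[-1], []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # dest unreachable: the mesh is partitioned

# Hypothetical neighbourhood mesh: home nodes linked by radio range.
mesh = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
print(flood_route(mesh, "a", "d"))  # ['a', 'b', 'd']
```

Because every node participates in forwarding, losing one link (say, removing b) still leaves a route via c, which is the robustness property the mesh design depends on.&lt;br /&gt;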
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure those basic services that tolerate lower speeds, such as email and instant messaging. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who want higher speeds and the services that depend on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh has no single point of connection, so it cannot be disabled as easily as a current ISP.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Given the significant portion of the population that uses the Internet to communicate, this could be a major benefit in a disaster scenario: other forms of communication relying on centralized infrastructure would likely fail, while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and it could help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Due to its low population density, Canada has areas that parallel the rural regions where the technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless hardware to create a small network.  As the mesh grows, it can be self-organizing, with nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the urban centre&#039;s higher speed wired infrastructure.  The relationship between the density of connection points and the speeds the mesh can sustain has been studied, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of some infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To take full advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, normally considered a low-bandwidth service: if a large attachment were present, it would make sense to use the faster network connection to download it.  The software would therefore have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some personal cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, such as a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be served later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data then, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (e.g. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity between requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, the requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache brings to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
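The request flow described above (serve from the cache when possible, otherwise fetch from the origin and keep a copy) reduces, in its simplest form, to a look-aside cache. The sketch below is illustrative only; real proxies also honour freshness headers, cache-control directives, and storage limits.

```python
# Minimal look-aside web cache (illustrative sketch, not a real proxy).

cache = {}          # url -> cached response body
origin_fetches = 0  # counts requests that actually reach the origin server

def fetch_from_origin(url):
    global origin_fetches
    origin_fetches += 1
    return f"<html>content of {url}</html>"  # stand-in for a real HTTP GET

def get(url):
    if url not in cache:              # cache miss: go to the origin
        cache[url] = fetch_from_origin(url)
    return cache[url]                 # cache hit: origin is never contacted

get("http://example.com/logo.png")    # miss, fetched from origin
get("http://example.com/logo.png")    # hit, served locally
print(origin_fetches)                 # only one request left the cache
```

The bandwidth savings discussed above come precisely from the second call: the repeated request never leaves the proxy.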
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches are not examined in depth here, as they are considered implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
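The pass-up/copy-down behaviour of a hierarchical cache can be sketched as follows. The level names and stand-in "origin" fetch are assumptions for illustration; they are not from the cited survey.

```python
# Hierarchical cache lookup: a miss at each level passes the request up;
# the response leaves a copy at every lower level on the way back down.

levels = [  # ordered from closest to the user to closest to the origin
    {"name": "local",    "store": {}},
    {"name": "regional", "store": {}},
    {"name": "national", "store": {}},
]

def lookup(url):
    for i, level in enumerate(levels):
        if url in level["store"]:                 # hit at this level
            data = level["store"][url]
            break
    else:
        i, data = len(levels), f"origin:{url}"    # miss everywhere: hit origin
    for level in levels[:i]:                      # copy down toward the user
        level["store"][url] = data
    return data

lookup("http://example.com/")   # fetched from origin, cached at all levels
lookup("http://example.com/")   # now a hit in the local cache
```

This is how popular sites "propagate towards the demand": each request pulls a copy of the data down to the caches nearest the users making it.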
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with one another to satisfy web requests. To do this, each cache retains metadata about the contents of all of the other caches it cooperates with and uses it to fulfill web requests received from clients. This scheme allows for better load balancing and introduces fault tolerance not available in strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
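The cooperation just described, in which each cache keeps metadata about its peers' contents, might be sketched like this. The class and method names are hypothetical; real systems use digests or hash-based protocols rather than eagerly advertised directories.

```python
# Distributed caching sketch: one level of peer caches, each holding a
# directory of which peer stores which URL (all names are illustrative).

class PeerCache:
    def __init__(self, name):
        self.name = name
        self.store = {}       # urls this cache holds locally
        self.directory = {}   # url -> peer that holds it (shared metadata)

    def put(self, url, data, peers):
        self.store[url] = data
        for p in peers:                    # advertise to cooperating peers
            p.directory[url] = self

    def get(self, url):
        if url in self.store:              # local hit
            return self.store[url]
        peer = self.directory.get(url)     # consult the shared metadata
        if peer is not None:
            return peer.store[url]         # remote hit at a sibling cache
        return None                        # miss: would go to the origin

a, b = PeerCache("cache-a"), PeerCache("cache-b")
a.put("http://example.com/x", "payload", peers=[b])
print(b.get("http://example.com/x"))       # served from cache-a's store
```

The load balancing and fault tolerance mentioned above follow from this structure: any peer can answer for any cached object, and losing one cache loses only its local store, not the whole system.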
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are currently implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, of course, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level caches, and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Formalized, standardized web cache hierarchies would reduce wasted bandwidth and improve the end user experience. They would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. Users could share their caches with each other, allowing the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power, removing the burden from the users&#039; computers and allowing a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. The new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while using the locally cached data as well. In this type of system, web developers would design their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. This would mean that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet which, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by the caches of another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: web requests satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that is not present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication provides fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability, especially with full web application caching: in the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, because full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity put into public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies to improve how web caching is done, could be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both software and infrastructure, is incrementally deployable. A scheme like the one proposed would most likely start in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it incorporated the ISP caches already in place, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet: to make the system work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Wikipedia/Domain Name System. Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP): the ISP maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
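Treating DNS as a static distributed tree, a lookup simply walks the name's labels from the root downward. This is a toy model under assumed zone data (the tree and address below are made up for illustration); real resolvers query a hierarchy of authoritative servers iteratively or recursively.

```python
# Toy DNS lookup over a static tree: walk labels right-to-left from the root.

dns_tree = {                       # hypothetical zone data for illustration
    "ca": {
        "carleton": {
            "scs": {"_addr": "134.117.0.1"},   # made-up address record
        },
    },
}

def resolve(name):
    node = dns_tree
    for label in reversed(name.split(".")):    # "scs.carleton.ca" -> ca, carleton, scs
        node = node.get(label)
        if node is None:
            return None                        # name does not exist (NXDOMAIN)
    return node.get("_addr")

print(resolve("scs.carleton.ca"))   # 134.117.0.1
print(resolve("no.such.name"))      # None
```

The &amp;quot;switchboard&amp;quot; behaviour described above is exactly this mapping: a name goes in, an IP address (or a failure, which ISPs sometimes redirect, as discussed below) comes out.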
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  It is understood by users that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
User privacy raises further issues, though.  In the case of Google, even given a clean track record, there is reason to consider how Google will treat and use the information it gains access to in providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues must also be considered for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing ample content, it is difficult to imagine such a large responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations, centring on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottlenecks because a small number of servers must serve many users.  For example, Bell Canada customers are served by only two servers for this service across the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver records can take up to 48 hours to propagate across the Internet.  DNS servers around the world cache records for a fixed period before refreshing them and, with caching at multiple levels, this period is required for the changes to reach everyone.&lt;br /&gt;
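The propagation delay can be made concrete with a small cache sketch. This is an assumed, minimal model (the class, record shape and TTL value are invented for illustration): a resolver keeps serving its cached answer until the time-to-live expires, so a stale record can be returned for up to the TTL after an upstream change.

```python
import time

# Minimal illustrative model of resolver-side caching: each cached answer
# carries an expiry time, and is served until that time passes.
class ResolverCache:
    def __init__(self):
        self._cache = {}  # name -> (ip, expiry timestamp in seconds)

    def put(self, name, ip, ttl, now=None):
        now = time.time() if now is None else now
        self._cache[name] = (ip, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(name)
        if entry is None or now > entry[1]:
            return None  # expired or absent: must re-query upstream
        return entry[0]

cache = ResolverCache()
# Cache a (fictional) record with a one-hour TTL at time 0.
cache.put("example.org", "192.0.2.1", ttl=3600, now=0)
print(cache.get("example.org", now=1800))  # -> 192.0.2.1 (still served, even if changed upstream)
print(cache.get("example.org", now=4000))  # -> None (TTL expired; the new record is fetched)
```

Until the TTL runs out, the cache happily serves the old address; only after expiry does a fresh lookup pick up the change, which is why worldwide propagation is bounded by the caching period rather than being instantaneous.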
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited set of servers to severely disrupt Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is central to the improvements that lie ahead for DNS.  Much as web caching does for regular content browsing, DNS caching improves performance by reducing latency.  DNS caches could be contained within the web caching schemes presented in the previous section: the hierarchical structure described there can function equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
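The piggybacking idea above can be sketched as a chain of cache levels. This is a sketch under assumptions, not a real implementation: the level names, the `AUTHORITATIVE` stand-in table and the address are invented; a real deployment would issue actual DNS queries at the top of the chain.

```python
# Illustrative sketch of a DNS cache piggybacking on a hierarchical web cache:
# a miss at the local level falls through to the regional level, then to an
# authoritative lookup, and answers are cached on the way back down so that
# later users nearby get fast local hits.

AUTHORITATIVE = {"example.org": "192.0.2.1"}  # stand-in for a full DNS query

class CacheLevel:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.entries = name, parent, {}

    def lookup(self, domain):
        if domain in self.entries:          # hit at this level
            return self.entries[domain]
        if self.parent is not None:         # miss: ask the next level up
            ip = self.parent.lookup(domain)
        else:                               # top level: authoritative resolution
            ip = AUTHORITATIVE.get(domain)
        if ip is not None:
            self.entries[domain] = ip       # populate on the way back down
        return ip

regional = CacheLevel("regional")
local = CacheLevel("local", parent=regional)
local.lookup("example.org")               # the first visit fills both cache levels
print("example.org" in regional.entries)  # -> True
```

After one user resolves a name, the answer sits at every level between them and the authority, which is the "democratic" effect described above: popular names end up cached close to the users who visit them.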
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, which is a very important property when upgrading any part of a complex, distributed system like the Internet.  Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.  In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been eliminated.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate for a public good.  Directing traffic on the Internet, whether user-based or application-based, requires a naming service.  Without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services suggest the ideal scenario lies in public hands.  Regardless of its implementation, the service must be both reliable and trusted.  Users depend on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.  Having DNS in public hands would ensure this reliable service.  If the public also controls web caching, DNS can be incrementally deployed and rolled out as a piggyback on that scheme.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest driving the maintenance of this service, the reliability and trust issues are satisfied.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection are averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of the CoDoNS service described above makes it one example of an upgrade that a public authority could adopt at its own pace.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability.  The cache is also democratized: users simply use the sites they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring or mandating, and then maintaining, the current system will impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in their decision making.  Users may miss out on the newest services available while the authority evaluates its upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands rapidly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is mandatory. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system, and it is a central concern here as well. Any potential public good should provide some kind of performance improvement; otherwise there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user.&lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialled in certain locations before being widely introduced. This allows these new systems to grow dynamically, starting in the areas that need them most and eventually reaching more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet for a significant portion of their revenue and are in fact responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only certain portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to identify future public goods candidates. An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different than the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more we will notice each individual advantage. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this is to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9520</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9520"/>
		<updated>2011-04-12T04:09:08Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria to identify other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a heterogeneous system of computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be removed from being solely in the hands of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, it quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities; yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as Internet Service Providers (ISPs). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using various criteria decided by the ISPs.  While the technology can benefit everyone, with it implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know if the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents.  There are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by requiring the ISPs to behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law is passed preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure.  In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, the privately owned ISP might even disappear entirely.&lt;br /&gt;
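The super-node election mentioned above can be illustrated with a toy scoring function. This is a hypothetical sketch only: the scoring rule (uptime times link speed), the node names and the numbers are all invented for illustration, and real mesh routing protocols such as DART are far more involved.

```python
# Hypothetical sketch of super-node election in the mesh: each node advertises
# its availability and link speed, and the highest-scoring nodes in a
# neighbourhood are elected to carry routing duties. The scoring rule below is
# an assumption for illustration, not taken from any real protocol.

def elect_super_nodes(nodes, count=2):
    """nodes: list of (node_id, uptime_fraction, link_mbps) tuples."""
    def score(node):
        _, uptime, mbps = node
        return uptime * mbps  # favour stable, fast nodes

    ranked = sorted(nodes, key=score, reverse=True)
    return [node_id for node_id, _, _ in ranked[:count]]

neighbourhood = [
    ("home-pc", 0.95, 50),   # fairly static home machine: high availability
    ("laptop",  0.30, 54),   # mobile node: comes and goes
    ("router",  0.99, 100),  # always-on node
]
print(elect_super_nodes(neighbourhood))  # -> ['router', 'home-pc']
```

The intuition matches the text: the fairly static, highly available nodes (home machines, always-on hardware) win the election and route traffic, while highly mobile nodes with variable availability participate without carrying routing responsibility.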
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services, such as email and instant messaging, that are more tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet is surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural regions where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental rollout. It could start in a single neighbourhood, using the neighbours&#039; wireless routers to create a small network. As the mesh grows, it can be self-organizing, with nodes elected to more prominent roles if they have sufficient speed. The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre. The density of connection points has been studied and is related to the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
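The self-organizing role election described above can be sketched as follows. This is a minimal illustration, not a real mesh protocol: the node names, speed figures, and the 20 Mbps threshold are invented for the example.

```python
# Hypothetical sketch: electing mesh nodes to more prominent routing
# roles based on measured link speed. Nodes at or above the threshold
# become relays, fastest first; all names and numbers are illustrative.

def elect_relays(nodes, min_mbps=20.0):
    """Return the nodes fast enough to act as relays, fastest first."""
    return sorted(
        (name for name, mbps in nodes.items() if mbps >= min_mbps),
        key=lambda name: -nodes[name],
    )

neighbourhood = {"alice": 45.0, "bob": 8.5, "carol": 30.0, "dave": 12.0}
print(elect_relays(neighbourhood))  # ['alice', 'carol']
```

A real mesh would re-run an election like this as nodes join, leave, or change speed, which is what makes the deployment incremental.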
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in providing some of the infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, making the increase almost imperceptible to any individual taxpayer.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To take full advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change. Email is an example: it is normally considered a low-bandwidth service, but if a large attachment were present it would make sense to use the faster network connection to download it. The software would therefore have to be aware of the availability and capability of both networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes that route and otherwise maintain the network, incurs some personal cost. This could take the form of CPU cycles consumed and additional power used to keep a node available. Alternatively, a dedicated piece of hardware, such as a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this traffic, the level of similarity between requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. For many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. Even small performance improvements made by an ISP through caching have been found to produce a significantly better end-user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage at the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that reach the server by serving data it has stored, which can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
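The store-on-miss, serve-on-hit behaviour described above can be sketched in a few lines. This is a toy illustration of the general idea, not any real proxy software; the `fetch_from_origin` callable stands in for an actual HTTP fetch.

```python
# Minimal sketch of the web-caching idea: on a miss the request goes to
# the origin server and the response is stored; on a hit the cached copy
# is returned without contacting the origin. Purely illustrative.

class WebCache:
    def __init__(self, fetch_from_origin):
        self.fetch = fetch_from_origin
        self.store = {}
        self.hits = self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1
            return self.store[url]      # served locally, no origin traffic
        self.misses += 1
        body = self.fetch(url)          # only on a miss
        self.store[url] = body          # keep a copy for later requests
        return body

cache = WebCache(lambda url: f"<html>content of {url}</html>")
cache.get("http://example.com/logo.png")   # miss: fetched from origin
cache.get("http://example.com/logo.png")   # hit: served from cache
print(cache.hits, cache.misses)            # 1 1
```

The "barring certain conditions" caveat in the text corresponds to the freshness checks (expiry, validation) that a real cache would add before serving the stored copy.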
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received considerable research attention. Different approaches to web caching have been proposed, many of which use distributed or hierarchical elements. These approaches are not examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
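The hierarchical lookup can be sketched as a chain of caches: a request climbs from the client toward the national level until satisfied, and a copy is left at every level it passed through on the way back down. This is an illustrative sketch under the survey's description, not a real implementation; the level names and `origin` callable are assumptions.

```python
# Sketch of a hierarchical web cache: a miss is forwarded to the parent
# level; the response is stored at each level as it travels back down,
# so popular objects propagate toward the demand.

class Level:
    def __init__(self, name, parent=None, origin=None):
        self.name, self.parent, self.origin = name, parent, origin
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name   # hit at this level
        if self.parent is not None:
            body, source = self.parent.get(url) # forward the miss upward
        else:
            body, source = self.origin(url), "origin"
        self.store[url] = body                  # copy left on the way down
        return body, source

origin = lambda url: f"data for {url}"
national = Level("national", origin=origin)
regional = Level("regional", parent=national)
local = Level("local", parent=regional)

_, src1 = local.get("/page")   # first request climbs to the origin
_, src2 = local.get("/page")   # now answered by the local cache
print(src1, src2)              # origin local
```

After the first request, the object also sits in the regional and national stores, which is exactly the "propagate towards the demand" behaviour the text describes.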
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with one another to satisfy web requests. To do this, each cache retains metadata about the contents of all the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This scheme allows better load balancing and introduces fault tolerance not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
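The one-level cooperative scheme can be sketched as peers that consult each other's metadata before going to the origin. This is a simplified illustration: real systems exchange compact digests rather than full URL sets, and the peer names here are invented.

```python
# Sketch of distributed web caching: each cache keeps metadata (here,
# simply the set of cached URLs) about its peers and forwards a miss to
# a peer known to hold the object, falling back to the origin otherwise.

class PeerCache:
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.peers = []              # cooperating caches at the same level

    def directory(self):
        return set(self.store)       # metadata shared with peers

    def get(self, url, fetch):
        if url in self.store:
            return self.store[url], self.name
        for peer in self.peers:
            if url in peer.directory():      # consult peer metadata
                return peer.store[url], peer.name
        body = fetch(url)                    # true miss: go to origin
        self.store[url] = body
        return body, "origin"

a, b = PeerCache("cache-a"), PeerCache("cache-b")
a.peers, b.peers = [b], [a]
fetch = lambda url: f"data for {url}"

a.get("/img", fetch)             # miss everywhere: a fetches and stores
_, who = b.get("/img", fetch)    # b is empty, but metadata points to a
print(who)                       # cache-a
```

Because any peer holding a copy can answer, load is spread across the level and the loss of one cache does not take the object offline, matching the load-balancing and fault-tolerance claims above.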
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Customer satisfaction is important to ISPs, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. One benefit is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power, removing the burden from the users&#039; computers and allowing a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the infrastructure described above were put in place, an opportunity would arise to extend the classic definition of web caching. The new infrastructure could cache web application code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while also making use of the locally cached data. In this type of system, web developers would build their applications to use the available resources and maintain only a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, anyone could write the next Facebook or Google without the enormous financial and physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this broader definition of web caching is that fragments of the Internet that become disconnected from the whole, for one reason or another, could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or one willfully disconnected from the rest of the Internet, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in the reachable caches. This added robustness would certainly reduce the panic inherent in such situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from one ISP&#039;s user that could be satisfied by another local ISP&#039;s cache must instead be retrieved all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that a user&#039;s web requests can be satisfied nearby are significantly improved. This leads to reduced wait times for end users, improving their overall web experience. The effect would be especially noticeable if the lowest level of the proposed caching hierarchy (the distributed, neighbourhood-level cache) were implemented: web requests satisfied within a user&#039;s immediate neighbourhood would be extremely fast. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by regional, provincial or national level caches rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity put into public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies to improve how caching is done, could be implemented whenever it is in the public interest. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both software and infrastructure, is incrementally deployable. A scheme like the one proposed would most likely start in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the benefits mentioned previously) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place would certainly involve significant infrastructure costs, even if it incorporated the ISP caches already in place. Although a considerable amount of infrastructure may be available in large urban centers, rural regions as well as the higher levels of the hierarchy (provincial, national, etc.) would likely need sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax, or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (itself a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) in order to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet’s vast ubiquity, as discussed in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (the Domain Name System) serves this purpose by allowing resources to be referred to by name rather than by a series of numbers; it can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Wikipedia/Domain Name System. Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
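The name-to-address mapping DNS provides can be exercised directly from Python's standard library; resolving `localhost` keeps the sketch free of any network dependency.

```python
# Minimal illustration of what DNS does for a user or application:
# supply a name, get back an IP address. Uses the system resolver via
# the standard socket module.
import socket

def resolve(hostname):
    """Return the first IPv4 address for a hostname, as DNS would."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return infos[0][4][0]   # (family, type, proto, canonname, sockaddr)

print(resolve("localhost"))   # 127.0.0.1
```

Everything between this one call and the answer (recursive resolvers, caching, the distributed tree of nameservers) is the machinery discussed in the rest of this section.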
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a simpler, higher-level view of the system is taken. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to the individual user&#039;s Internet Service Provider (ISP), which maintains the database or tree of names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service. It is understood that all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any of a number of alternative services, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy. In the case of Google, even given a clean track record of providing free applications and services, there is reason to consider how the company would treat and use the information it gains access to.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would gain deep insight into user behaviour, being able to determine every single thing that is sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine so large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system meets the needs of the majority of users and applications, the current implementation suffers from several problems, centring on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be addressed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottlenecks because a small number of servers must serve many users.  For example, Bell Canada customers across the entire country are served by only two servers for this service.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world only refresh cached records once those records expire, so this period is required for the changes to reach everyone.&lt;br /&gt;
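&lt;br /&gt;
The delay comes from caching: a record stays in a resolver&#039;s cache until its time-to-live (TTL) elapses, and only then is the updated record fetched. A toy cache illustrating this expiry behaviour (the name, address and 48-hour TTL are illustrative values):&lt;br /&gt;

```python
import time

class TTLCache:
    """Toy DNS cache: entries are served until their TTL expires."""
    def __init__(self):
        self._store = {}

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self._store[name] = (address, now + ttl)   # remember expiry time

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None or now >= entry[1]:
            return None  # expired: a fresh lookup would fetch the updated record
        return entry[0]

cache = TTLCache()
cache.put("example.com", "93.184.216.34", ttl=172800, now=0)  # 48-hour TTL
```

Until the TTL runs out, every resolver keeps answering with the old address, which is exactly why nameserver changes take up to the TTL period to propagate.&lt;br /&gt;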
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified under the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited set of servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves considerable room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major avenue for improving DNS performance: much as web caching does for regular content browsing, DNS caching improves performance by reducing latency.  DNS caches could be contained within the web caching schemes presented in the previous section, since the hierarchical structure described there functions equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
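&lt;br /&gt;
The piggyback idea can be sketched as a lookup that walks the cache tiers (local, then regional, and so on) and, on a miss, fills every tier on the way back from the authoritative server. This is a simplification with hypothetical data, not a description of any deployed scheme:&lt;br /&gt;

```python
def lookup(name, cache_levels, authoritative):
    # Try each cache tier in order, nearest first
    for level in cache_levels:
        if name in level:
            return level[name], "cache"
    # Miss everywhere: ask the authoritative server, then fill every tier
    address = authoritative[name]
    for level in cache_levels:
        level[name] = address   # piggyback: each tier now serves future requests
    return address, "authoritative"

local, regional = {}, {}
auth = {"example.com": "93.184.216.34"}        # stand-in authoritative data
first = lookup("example.com", [local, regional], auth)
second = lookup("example.com", [local, regional], auth)
```

The first visit populates the caches; every later request for the same name is served locally, which is the democratic effect described above.&lt;br /&gt;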
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Given the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network, removing bottlenecks and increasing attack resiliency because the single points of failure are gone.&lt;br /&gt;
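&lt;br /&gt;
A flavour of how a peer-to-peer design spreads the name space across many machines can be given with a simple hash-based placement sketch. This is a drastic simplification (CoDoNS itself builds on the Beehive/Pastry overlay with proactive replication); the node names are hypothetical:&lt;br /&gt;

```python
import hashlib

def home_node(name, nodes):
    # Hash the domain name onto the node list so that no single
    # server is responsible for all lookups (no single point of failure)
    digest = int(hashlib.sha1(name.encode("ascii")).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["peer-a", "peer-b", "peer-c", "peer-d"]
owner = home_node("example.com", nodes)
```

Every participant computes the same mapping, so any of them can route a query toward the responsible peer without consulting a central directory.&lt;br /&gt;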
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate for a public good.  Directing traffic on the Internet, whether user-driven or application-driven, requires a naming service; without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far, but the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services suggest the ideal scenario lies in public hands. Regardless of its implementation, the service must be both reliable and trusted.  The user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service.   Having DNS in public hands would ensure this reliable service, and if the public also controls web caching, DNS could be incrementally deployed by piggybacking on that rollout.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest driving the maintenance of this service, the reliability and trust issues are satisfied.  Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind.  Misinformation and misdirection are averted by placing that trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployment offered by a service such as CoDoNS would allow such an upgrade to be rolled out gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching: reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized, as users only need to use sites as they see fit and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such arrangement must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring or mandating, and then maintaining, the current system will impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and quickly become obsolete, cycling through the public&#039;s hands very rapidly and at great expense. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is mandatory. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience, yet the parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
The Internet is becoming a ubiquitous entity in modern society, and access to it grows more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, significant portions of the Internet would be undesirable or unlikely candidates for public ownership: many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria emerged that can be used to identify future public good candidates on the Internet. A further benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would each have to adhere to the criteria listed above, they would often do so in different ways; for instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from that provided by the proposed web caching scheme. Moreover, the performance improvements provided by one public good would most likely be amplified by the gains introduced by other, new public goods: the more qualifying aspects of the Internet are converted into public goods, the more noticeable each individual advantage becomes. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given earlier was full web application caching; this type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the world wide web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet will play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and perhaps only true, way of doing this is to give users the overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9519</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9519"/>
		<updated>2011-04-12T04:08:27Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies enabling it should be placed in trust for the benefit of the entire population.    In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria for identifying other public good candidates are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, it quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet and do not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISP ownership of the Internet&#039;s infrastructure.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, and thus avoid congestion, by assigning priorities to packets using criteria the ISPs decide.  While congestion control can benefit everyone, when the technology is implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know whether it is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid them, implemented by slowing or disallowing traffic to competitors.  While ISPs have not openly proposed this, it is what the movement known as Net Neutrality fights against&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: during the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet simply by forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
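&lt;br /&gt;
Shapers of this kind are typically some variant of a token bucket: each traffic class drains tokens as packets pass and is throttled once the bucket empties. The following is a generic sketch of the mechanism (the actual policies and parameters ISPs use are not public):&lt;br /&gt;

```python
class TokenBucket:
    """Generic token-bucket shaper: rate limits with a burst allowance."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens (bytes) replenished per second
        self.capacity = capacity  # maximum burst size in bytes
        self.tokens = capacity
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the time elapsed, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False  # over the rate: packet is delayed or dropped
```

Applied per protocol or per customer, such a bucket lets an ISP cap a traffic class at a chosen rate while still permitting short bursts.&lt;br /&gt;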
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
Given the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that ISPs behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing it is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably its speed would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure; in doing so we can see the concrete benefits, in addition to reduced dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation we chose to explore for the purposes of this paper is a wireless mesh.  The mesh would exist in conjunction with the current ISP infrastructure and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers, along with a large number of highly mobile nodes of variable availability consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres hosting the mesh could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, though performance would be severely impacted with very few nodes available for routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed can increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
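&lt;br /&gt;
The super-node election mentioned above can be caricatured as ranking nodes by availability and capacity and promoting the best of them to routing duty. This is a static toy version with made-up node data; protocols such as DART perform this dynamically and in a distributed fashion:&lt;br /&gt;

```python
def elect_super_nodes(nodes, fraction=0.25):
    # Score each node by how available and well-connected it is,
    # then promote the top fraction to act as routing super nodes
    ranked = sorted(nodes, key=lambda n: n["uptime"] * n["bandwidth"], reverse=True)
    count = max(1, int(len(ranked) * fraction))
    return [n["id"] for n in ranked[:count]]

mesh = [
    {"id": "desktop-1", "uptime": 0.99, "bandwidth": 50},   # static home PC
    {"id": "laptop-1",  "uptime": 0.40, "bandwidth": 30},   # mobile node
    {"id": "phone-1",   "uptime": 0.20, "bandwidth": 10},   # mobile node
    {"id": "desktop-2", "uptime": 0.95, "bandwidth": 100},  # static home PC
]
supers = elect_super_nodes(mesh)
```

Static, well-provisioned home machines naturally win the election, matching the intuition that they should carry the routing load while mobile devices come and go.&lt;br /&gt;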
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services, such as email and instant messaging, that tolerate lower speeds. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  Because a mesh presents no single point of connection, it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and could help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Due to its low population density, Canada has areas that parallel the rural regions where this technology has been deployed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental rollout.  It could start in a single neighborhood, using the neighbours&#039; wireless to create a small network.  As the mesh grows it can be self-organizing, with nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the urban centre&#039;s higher speed wired infrastructure.  The density of connection points has been studied and relates to the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of some infrastructure would necessitate an increase in taxes. Since the support would be at all levels of government, the taxes would be distributed at all levels becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To take full advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  Email, for example, is normally considered a low-bandwidth service, but if a large attachment were present it would make sense to use the faster network connection to download it.  The software would thus have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
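&lt;br /&gt;
Such network-aware behaviour could be as simple as a size threshold. The threshold value and the network labels below are arbitrary illustrations, not part of any proposed standard:&lt;br /&gt;

```python
def choose_network(payload_bytes, mesh_threshold=1_000_000):
    # Small transfers (plain email, instant messages) ride the free
    # public mesh; large ones (attachments, streaming) use the faster
    # private ISP link when it is available
    return "mesh" if payload_bytes <= mesh_threshold else "isp"

plain_email = choose_network(20_000)        # a short text message
attachment = choose_network(25_000_000)     # a large email attachment
```

A mail client making this choice per message would keep routine traffic off the paid link while still downloading large attachments at full speed.&lt;br /&gt;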
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes that route traffic and otherwise maintain the network, incurs some cost.  This could take the form of consumed CPU cycles and the additional power needed to keep nodes available.  Alternatively, a dedicated piece of hardware, such as a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that later requests can be served without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached copy is returned and the request is not passed on to the originating web server. Many elements of websites do not change very often (e.g. logos, static text, pictures, other multimedia) and are hence good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency to the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests travelling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that even small performance improvements made by an ISP through caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage for the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that reach it by serving the data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
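The cache-then-forward behaviour described above can be sketched in a few lines of Python. This is a toy illustration only; the unbounded store and the &#039;&#039;origin_fetch&#039;&#039; stand-in are invented for the example and ignore expiry, cacheability headers and eviction.&lt;br /&gt;

```python
# Toy web cache: serve from local storage on a hit, otherwise forward the
# request to the origin server and keep a copy of the response.
class WebCache:
    def __init__(self, origin_fetch):
        self.store = {}                  # url -> cached response body
        self.origin_fetch = origin_fetch

    def get(self, url):
        if url in self.store:            # hit: the origin is not contacted
            return self.store[url], "HIT"
        body = self.origin_fetch(url)    # miss: forward to the origin
        self.store[url] = body           # cache the response for later users
        return body, "MISS"

fetch_count = 0                          # counts requests reaching the origin

def origin_fetch(url):                   # hypothetical origin server
    global fetch_count
    fetch_count += 1
    return "<html>page at %s</html>" % url

cache = WebCache(origin_fetch)
cache.get("http://example.com/logo.png")                 # first request: MISS
body, status = cache.get("http://example.com/logo.png")  # repeat request: HIT
```

The second request never leaves the cache, which is the bandwidth saving the references above quantify.&lt;br /&gt;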
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which use distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. Here, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. Web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches use bandwidth efficiently by allowing popular websites to propagate towards the demand.&lt;br /&gt;
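The upward pass on a miss and the copies left on the way back down can be sketched as follows. This is a minimal model, assuming a hypothetical three-level chain and &#039;&#039;origin_fetch&#039;&#039; function; real hierarchies also handle expiry and cache sizing.&lt;br /&gt;

```python
# Toy hierarchical cache: a miss is passed to the parent cache; the response
# leaves a copy at every level as it travels back down to the client.
class HierarchicalCache:
    def __init__(self, name, parent=None, origin_fetch=None):
        self.name, self.parent, self.origin_fetch = name, parent, origin_fetch
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name     # satisfied at this level
        if self.parent is not None:
            body, level = self.parent.get(url)    # pass the request upward
        else:
            body, level = self.origin_fetch(url), "origin"
        self.store[url] = body                    # copy left at this level
        return body, level

national = HierarchicalCache("national", origin_fetch=lambda u: "data:" + u)
regional = HierarchicalCache("regional", parent=national)
local = HierarchicalCache("local", parent=regional)

first_body, first_level = local.get("http://example.com/")    # climbs to origin
second_body, second_level = local.get("http://example.com/")  # now a local hit
```

After the first request the popular object has propagated towards the demand: it now sits in the local, regional and national caches.&lt;br /&gt;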
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is a single level of caches that cooperate with one another to satisfy web requests. To do this, each cache retains metadata about the contents of all of the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
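The cooperating single-level scheme can be sketched the same way. Here the shared metadata is modelled as one directory mapping each URL to the peer that holds it; all names are invented, and a real system would replicate or exchange this metadata rather than share one dictionary.&lt;br /&gt;

```python
# Toy distributed cache: peers share a directory (metadata) recording which
# peer holds which URL, and satisfy misses from each other before the origin.
class PeerCache:
    def __init__(self, name, directory):
        self.name, self.store, self.directory = name, {}, directory

    def put(self, url, body):
        self.store[url] = body
        self.directory[url] = self       # advertise the copy to the peers

    def get(self, url, origin_fetch):
        if url in self.store:
            return self.store[url], self.name
        owner = self.directory.get(url)  # consult the shared metadata
        if owner is not None:
            return owner.store[url], owner.name   # served by a peer
        body = origin_fetch(url)         # nobody has it: go to the origin
        self.put(url, body)
        return body, "origin"

directory = {}
cache_a = PeerCache("A", directory)
cache_b = PeerCache("B", directory)
cache_a.get("http://example.com/", lambda u: "page")    # A fetches it once
body, served_by = cache_b.get("http://example.com/", lambda u: "page")
```

B&#039;s request is satisfied by A without contacting the origin, which is the cooperation benefit described above.&lt;br /&gt;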
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists; however, a number of caches at each level cooperate with each other in a distributed fashion. This type of system can combine the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and is therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are currently implemented mostly by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction matters, of course, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and the end-user-experience aspects of web caching. This could be achieved by the government taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be served by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: ISP-level caches would work together to serve a relatively small region, followed by a level of web caches serving a larger geographical region, then provincial/state-level caches and finally a national level. All of these would be standardized so that regional or provincial caches could serve web requests for users in different regions or provinces. Formalized, standardized cache hierarchies would reduce wasted bandwidth and improve the end user experience. They would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, which translates into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power, removing the burden from users&#039; computers and letting a special-purpose device take over. Since most users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As before, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful machine could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet which, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would retain access to all of the web data stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached requests from users of one ISP that might be satisfiable by the caches of another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national-level caches rather than being sent all the way to the original web server.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the hierarchy adds a level of reliability that is not present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than what can be used efficiently as a cache, data duplication becomes possible. This duplication provides fault tolerance, and the caches could be implemented to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability when combined with full web application caching. If a single region were disconnected from the Internet, users could still use the cached popular web applications and data until they were reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is re-established, resulting in unprecedented reliability for the Internet&#039;s most popular sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, because full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity put into public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies that improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently, we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme is incrementally deployable, in terms of both software and infrastructure. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it incorporated the ISP caches already deployed, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as the higher levels of the hierarchy (provincial, national, etc.), would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work, whether converting the old ISP caches or setting up new ones. Setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet: to make the system work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
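The static-tree view taken here can be made concrete with a toy resolver. The zone data and the address below are invented for illustration; a real resolver also handles record types, delegation between servers, and caching.&lt;br /&gt;

```python
# Toy DNS tree: a lookup walks from the root one label at a time,
# starting with the top-level domain.
root = {
    "ca": {
        "carleton": {
            "scs": {"homeostasis": "134.117.0.1"},   # hypothetical address
        },
    },
}

def resolve(name, tree=root):
    node = tree
    for label in reversed(name.split(".")):   # walk from the TLD downward
        if not isinstance(node, dict) or label not in node:
            return None                       # no such name (NXDOMAIN)
        node = node[label]
    return node if isinstance(node, str) else None

ip = resolve("homeostasis.scs.carleton.ca")
```

A name that falls off the tree resolves to nothing, which is the case ISPs intercept with advertising-based redirects, as discussed below.&lt;br /&gt;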
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service.  Users must accept that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any of a number of alternatives, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.  In the case of Google, there is reason to consider how Google will treat and use the information it gains access to, even given a clean track record in providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;. Resource and maintenance issues must also be considered for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing ample content, it is difficult to imagine this large responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These involve bottlenecks, update propagation, attack resiliency and general performance.  Any replacement system would need to improve upon these issues.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects, since a small number of servers must serve many users.  For example, Bell Canada customers are served by two DNS servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also affects attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain&#039;s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world refresh their records on a fixed schedule and, because cached records are reused until they expire, this period is needed for a change to reach everyone.&lt;br /&gt;
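The delay comes from record lifetimes: a resolver reuses a cached record until its time-to-live (TTL) elapses, and only then fetches a fresh copy. A sketch with a simulated clock follows; the names, addresses and the 3600-second TTL are invented for illustration.&lt;br /&gt;

```python
# Toy TTL cache: a record is reused until its expiry time, after which
# the resolver must fetch a fresh copy from upstream.
class TTLCache:
    def __init__(self):
        self.store = {}                       # name -> (ip, expiry_time)

    def get(self, name, now, refetch, ttl):
        entry = self.store.get(name)
        if entry is not None and now < entry[1]:
            return entry[0]                   # record still fresh: reuse it
        ip = refetch(name)                    # stale or absent: ask upstream
        self.store[name] = (ip, now + ttl)
        return ip

authoritative = {"example.com": "1.1.1.1"}    # upstream source of truth
cache = TTLCache()
ip1 = cache.get("example.com", now=0, refetch=authoritative.get, ttl=3600)
authoritative["example.com"] = "2.2.2.2"      # the domain changes its address
ip2 = cache.get("example.com", now=1800, refetch=authoritative.get, ttl=3600)
ip3 = cache.get("example.com", now=4000, refetch=authoritative.get, ttl=3600)
```

Between the change and the expiry, the cache keeps answering with the old address, which is exactly the propagation window described above.&lt;br /&gt;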
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely disrupt Internet traffic at any time.  Measures are in place to prevent this kind of attack but, as with anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves considerable room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is central to the improvements that lie ahead for DNS performance.  Much as web caching improves regular content browsing, DNS caching improves performance by reducing lookup latency.  DNS caches could be contained within the web caching schemes presented in the previous section; the hierarchical structure described there can function equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, thereby providing answers locally in a somewhat democratic fashion: users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
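To illustrate the piggyback idea, here is a small sketch of a hierarchical lookup in which each cache tier answers locally when it can and otherwise asks its parent, falling back to an authoritative source at the root. The tier names and single-value records are illustrative assumptions, not a description of any deployed cache:

```python
class CacheLevel:
    """One tier in a hierarchical cache (e.g. browser, ISP, regional)."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.entries = {}

    def lookup(self, key, authoritative):
        if key in self.entries:
            return self.entries[key], self.name
        if self.parent is not None:
            value, hit_at = self.parent.lookup(key, authoritative)
        else:
            value, hit_at = authoritative[key], "authoritative"
        self.entries[key] = value  # populate this tier on the way back down
        return value, hit_at

authoritative = {"example.net": "203.0.113.5"}
regional = CacheLevel("regional")
isp = CacheLevel("isp", parent=regional)
local = CacheLevel("local", parent=isp)

value, hit_at = local.lookup("example.net", authoritative)
assert hit_at == "authoritative"   # first lookup walks the full hierarchy
value, hit_at = local.lookup("example.net", authoritative)
assert hit_at == "local"           # popular names are now answered locally
```

The first lookup seeds every tier it passes through, so the names users actually visit migrate toward the edge on their own, which is the democratic effect described above.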
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, which is a very important property when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications will be avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
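The decentralization idea can be hinted at with a toy peer-to-peer mapping: hash each name and let the hash decide which peer is responsible for the record, so no single server holds the whole tree. This is only the generic idea; the actual CoDoNS design (proactive, popularity-driven replication over a structured overlay) is far more sophisticated:

```python
import hashlib

def responsible_node(name, nodes):
    """Map a domain name onto one of several peers by hashing it.
    Illustrates the generic peer-to-peer idea only, not the CoDoNS
    protocol itself."""
    digest = hashlib.sha1(name.encode()).hexdigest()
    index = int(digest, 16) % len(nodes)
    return nodes[index]

nodes = ["peer-a", "peer-b", "peer-c", "peer-d"]
owner = responsible_node("example.com", nodes)
assert owner in nodes
# The mapping is deterministic, so every peer agrees who holds a record:
assert responsible_node("example.com", nodes) == owner
```

Because responsibility is spread across all peers by the hash, no single machine is a choke point, which is how this style of system removes the single points of failure discussed above.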
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service.  Without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies within public hands. Regardless of its implementation, it is a service that must be both reliable and trusted.  The user base depends on some form of trusted source, whether a governed initiative, a corporately controlled process, or a user-contributed service.   Having the DNS in public hands will ensure this reliable service.  If the public is in control of web caching as well, DNS can be incrementally deployed and rolled out as a piggyback on that scheme.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issues are addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection will be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of a service such as CoDoNS would make such an upgrade practical to roll out without disrupting existing users.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users only need to use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
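The democratizing effect described above can be sketched with a small bounded cache in which every visit refreshes an entry, so the sites users actually visit are the ones that stay resident. The class name and capacity are illustrative assumptions:

```python
from collections import OrderedDict

class PopularityCache:
    """Bounded cache in which each visit refreshes an entry, so the
    sites users actually visit are the ones that stay resident."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def visit(self, site, content):
        if site in self.entries:
            self.entries.move_to_end(site)    # refreshed by the visit
        self.entries[site] = content
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used

cache = PopularityCache(capacity=2)
cache.visit("a.example", "page A")
cache.visit("b.example", "page B")
cache.visit("a.example", "page A")   # a.example is popular
cache.visit("c.example", "page C")   # evicts b.example, the least used
assert "a.example" in cache.entries
assert "b.example" not in cache.entries
```

No administrator decides what is cached; the visiting pattern alone determines which sites get the fast path, which is the democratic property the text describes.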
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such arrangement must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring or mandating, and then maintaining, the current system will impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given their lower overhead and lesser caution in decision making.  Users may miss out on the newest services available while the authority evaluates its upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is mandatory. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
The Internet is becoming a ubiquitous entity in modern society, and access to it grows more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only certain portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one could identify future public good candidates. An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely be amplified by the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet meeting the above criteria that are converted into public goods, the more we will notice each individual advantage. Beyond these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given earlier was full web application caching. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the world wide web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9518</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9518"/>
		<updated>2011-04-12T04:05:42Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* General Public Goods and the Internet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies enabling it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria for identifying other candidate public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable, to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be easy to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities; yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets according to criteria decided by the ISPs.  While this can benefit everyone, when the technology is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We also don&#039;t know whether this technology is deployed merely to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While ISPs have not openly proposed this, the movement known as Net Neutrality has fought against it&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents.  There are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing it is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it creates its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not be as fast as the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides an alternative transport for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
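As one illustration of the super node election mentioned above, peers could be ranked by a combined availability and link-speed score, with the top-ranked nodes taking on routing duties. The metric below is a hypothetical illustration, not a protocol from the cited routing papers:

```python
def elect_super_nodes(nodes, count):
    """Pick the best-connected, most available nodes to act as routers.
    'nodes' maps a node id to an (availability, bandwidth) pair; the
    product score is a hypothetical election metric for illustration."""
    ranked = sorted(nodes, key=lambda n: nodes[n][0] * nodes[n][1],
                    reverse=True)
    return ranked[:count]

mesh = {
    "home-pc": (0.95, 50),   # (availability, link speed in Mbit/s)
    "laptop":  (0.40, 30),
    "router":  (0.99, 100),
    "phone":   (0.20, 10),
}
supers = elect_super_nodes(mesh, count=2)
assert supers == ["router", "home-pc"]
```

Static, well-connected home machines naturally outrank mobile devices under such a metric, matching the division of roles between the static and mobile nodes described above.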
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services such as email, instant messaging, and other services tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario; in such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for users whose primary use of the Internet is surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the wireless of the neighbours to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure in the urban centre.  The relationship between the density of connection points and the speeds the mesh can sustain has been studied, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing some infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, normally considered a low-bandwidth service; if a large attachment were present, it would make sense to use the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
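The kind of network awareness described above might look like the following sketch, in which software routes small transfers over the public mesh and large ones (say, a big attachment) over the faster ISP link. The threshold and names are illustrative assumptions:

```python
def choose_network(payload_bytes, mesh_available, isp_available,
                   size_threshold=1_000_000):
    """Pick a transport for a transfer: small messages ride the free
    public mesh, large ones use the faster ISP link when available.
    Threshold and return labels are illustrative only."""
    if payload_bytes >= size_threshold and isp_available:
        return "isp"
    if mesh_available:
        return "mesh"
    return "isp" if isp_available else None

assert choose_network(2_000, True, True) == "mesh"       # plain email text
assert choose_network(5_000_000, True, True) == "isp"    # large attachment
assert choose_network(5_000_000, True, False) == "mesh"  # fall back to mesh
```

Folding the decision into a single function like this keeps the policy in one place, so applications need only label transfers by size rather than know about either network directly.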
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be served later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures and other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency to the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
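The basic hit-or-miss behaviour described above can be sketched in a few lines of Python. This is a minimal illustrative model rather than any particular cache implementation; the fetch_origin callable is a hypothetical stand-in for contacting the originating web server.&lt;br /&gt;

```python
# Minimal sketch of a web cache: serve stored copies when possible,
# contact the originating server only on a miss.

class WebCache:
    def __init__(self, fetch_origin):
        self.fetch_origin = fetch_origin  # callable: url -> content
        self.store = {}                   # url -> cached web object

    def get(self, url):
        if url in self.store:             # hit: no request leaves the cache
            return self.store[url]
        content = self.fetch_origin(url)  # miss: retrieve from origin server
        self.store[url] = content         # keep a copy for later requests
        return content
```

When two users behind the same proxy request the same logo, the second request is satisfied locally; this is the source of the bandwidth, latency and server-load savings listed above.&lt;br /&gt;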
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be looked into in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
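The miss-goes-up, copy-comes-down behaviour of such a hierarchy can be sketched as follows. This is an illustrative model of the scheme just described, with invented names; only the top level contacts the origin server.&lt;br /&gt;

```python
# Sketch of hierarchical caching: a miss is passed up a level, and the
# answer leaves a copy at each level on the way back down.

class HierarchicalCache:
    def __init__(self, parent=None, fetch_origin=None):
        self.store = {}
        self.parent = parent              # next cache up (regional, national, ...)
        self.fetch_origin = fetch_origin  # only the top level contacts the origin

    def get(self, url):
        if url in self.store:
            return self.store[url]
        if self.parent is not None:
            content = self.parent.get(url)    # pass the request up the hierarchy
        else:
            content = self.fetch_origin(url)  # top level: go to the origin server
        self.store[url] = content             # leave a copy at this level
        return content
```

After one client pulls a popular page through its local cache, a second client under the same regional cache is served without the request ever leaving the region, which is how popular sites propagate towards the demand.&lt;br /&gt;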
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only a single level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses this meta-data to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
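The single-level, cooperating scheme can be sketched as below, assuming a shared directory as the meta-data each cache keeps about its peers. All names here are hypothetical; real systems exchange this meta-data with protocols rather than a shared dictionary.&lt;br /&gt;

```python
# Sketch of one-level distributed caching: each cache consults meta-data
# (a directory of which peer holds which URL) and forwards requests to a
# cooperating peer instead of the origin server whenever possible.

class DistributedCache:
    def __init__(self, name, directory, fetch_origin):
        self.name = name
        self.store = {}
        self.directory = directory        # shared meta-data: url -> owning cache
        self.fetch_origin = fetch_origin  # callable: url -> content

    def get(self, url):
        if url in self.store:
            return self.store[url]
        peer = self.directory.get(url)    # does a cooperating cache hold it?
        if peer is not None:
            return peer.store[url]        # satisfied by the peer, not the origin
        content = self.fetch_origin(url)  # no peer has it: fetch from origin
        self.store[url] = content
        self.directory[url] = self        # advertise the new copy to peers
        return content
```

Requests spread across the cooperating caches rather than concentrating on one parent, which is the load-balancing property noted above.&lt;br /&gt;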
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, and they do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP controlled web caches into a public good would allow for a balance between both the financial and end user experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood specific, ultra fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option to allow for lower level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful machine could be built relatively inexpensively, especially at large scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would develop their applications to make use of the available resources and then maintain a minimal back end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. This would mean that a small number of people with a very good idea could realistically come together and implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources required of modern day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests being sent out from the caches to the originating web servers will go down. Currently web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that might be able to be satisfied by caches implemented by another local ISP must be retrieved all the way from the originating web server. With the proposed architecture, these caches could then work together, essentially multiplying the available cache size. This would result in these types of web requests being satisfied locally and reducing the amount of long distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into the public hands, private interests in how web caches are controlled would now be a secondary concern to those of the public. This means that any innovation in web caching along with new technologies to improve how web caching is done can be implemented if it is in the best interests of the public. Currently we must rely on these upgrades being a worthy investment for a given ISP, regardless of how much upgrades would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme is incrementally deployable, both in terms of software and infrastructure. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would certainly incur significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new caches. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems are set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a simpler, higher level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
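The static-tree view taken here can be made concrete with a toy Python model. This is purely illustrative; real DNS involves resolvers, authoritative servers and caching rather than a single in-memory tree.&lt;br /&gt;

```python
# Toy model of DNS as a static tree queried with a domain name.

def resolve(tree, domain):
    """Walk the name tree from the root, one label at a time."""
    node = tree
    # "www.example.com" is traversed root-first: com -> example -> www
    for label in reversed(domain.split(".")):
        node = node[label]
    return node  # the leaf holds the IP address

# A tiny name tree; the address below is illustrative.
tree = {"com": {"example": {"www": "93.184.216.34"}}}
```

Querying the tree with a name returns the stored address, which is the name-to-number mapping the rest of this section treats as given.&lt;br /&gt;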
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  Users understand that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.  In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given a clean track record when it comes to providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to determine every single thing that is sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, there are some problems with the current implementations.  These issues centre on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects due to a low number of servers being accessed by many users.  For example, Bell Canada customers are served by two servers for the entire country.  Just as web caching has been shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also affects attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world have a static schedule for updating their records and, when considering caching, this period is required to get the changes across.&lt;br /&gt;
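This propagation delay follows from time-to-live (TTL) caching: a cached answer is reused until it expires, so a changed record is not visible everywhere until the caches holding the old answer time out. The mechanism can be sketched as below, with hypothetical names throughout.&lt;br /&gt;

```python
# Sketch of TTL-based record caching, the mechanism behind slow update
# propagation: a cached answer is served until its time-to-live expires.

class CachingResolver:
    def __init__(self, lookup, ttl, clock):
        self.lookup = lookup   # callable: name -> ip (authoritative answer)
        self.ttl = ttl         # seconds a cached answer remains valid
        self.clock = clock     # callable returning the current time in seconds
        self.cache = {}        # name -> (ip, expiry time)

    def resolve(self, name):
        entry = self.cache.get(name)
        if entry is not None and self.clock() < entry[1]:
            return entry[0]                     # still fresh: serve cached answer
        ip = self.lookup(name)                  # expired or absent: ask again
        self.cache[name] = (ip, self.clock() + self.ttl)
        return ip
```

Until the expiry time passes, the resolver keeps returning the old address even after the authoritative record changes, which is why nameserver changes can take up to the full TTL to be seen everywhere.&lt;br /&gt;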
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, indicated by the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack, however, like anything security based, it requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS.  DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the Web Caching Schemes presented in the previous section.  The hierarchical structure described can function equally well for DNS purposes.  Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion; users are able to dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer to peer distribution, the system boasts an improvement on all the factors indicated above.  It also has the benefit of being incrementally deployable, which is a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good. In essence, with this implementation the static DNS tree is decentralized and distributed across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  Directing traffic on the Internet, whether user-based or application-based, requires a naming service.  Without a functional service, the bulk of Internet traffic would falter, as it would not know where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services suggest that the ideal scenario lies in public hands. Regardless of its implementation, it is a service that must be both reliable and trusted.  The user base depends on some form of trusted source, whether that is a government initiative, a corporately controlled process, or a user-contributed service.  Having DNS in public hands would ensure this reliable service.  If the public also controls web caching, DNS can be incrementally deployed as a piggyback on that infrastructure.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because a public authority has a direct interest in maintaining this service, the reliability and trust issue is addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best interests in mind.  Misinformation and misdirection are averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of the CoDoNS service described earlier is one example of how such an upgrade could be rolled out gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically alongside a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users simply use sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring (or mandating) and maintaining the current system will impose a financial burden on the public, as does any good that is brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority could, given their lower overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If it is not, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these qualities, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for the users of a given Internet public good is mandatory. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user.&lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
The Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, in effect transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this is to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9517</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9517"/>
		<updated>2011-04-12T04:04:46Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* General Public Goods and the Internet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria for identifying other candidate public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it to one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be taken out of the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, it quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities; yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using criteria the ISPs decide.  While traffic shaping can benefit everyone, when the technology is implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During an uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet simply by forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing that behaviour is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
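In the simplest case, the super-node election mentioned above could rank nodes by a combined availability/speed score. The sketch below is purely illustrative; the scoring rule and the 10% fraction are assumptions made for the example, not the mechanisms used by DART or Scalable Landmark Flooding.&lt;br /&gt;

```python
def elect_super_nodes(nodes, fraction=0.1):
    """Pick the best-connected, most available nodes as routing super nodes.

    nodes maps a node id to (uptime_fraction, link_speed_mbps).
    The product of the two is a hypothetical fitness score; a real mesh
    protocol would use a more careful, distributed election."""
    ranked = sorted(nodes, key=lambda n: nodes[n][0] * nodes[n][1], reverse=True)
    count = max(1, int(len(ranked) * fraction))  # always elect at least one
    return set(ranked[:count])
```

A static home computer with high uptime and a fast link would thus outrank a mobile device that appears and disappears, matching the division of roles described above.&lt;br /&gt;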
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services such as email, instant messaging, and other services more tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  This could also help make Internet access available for fiscally disadvantaged members of the population as well.  Finally, a mesh topology has the potential to extend Internet coverage to low density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to our low population density, has areas that draw a parallel to these rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental rollout.  It could start in a single neighborhood, using the wireless routers of the neighbours to create a small network.  As the mesh increases in size it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two level system of Internet access that the mesh overlay provides, some software would need to be changed.  An example of this would be email which is normally considered a low bandwidth service. If a large attachment were present it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and the capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be served later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data then, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of websites do not change very often (e.g. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of total bandwidth used and, of this web-based traffic, the level of similarity between requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. For many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server that can return a cached version of the requested data. Because the total distance the data has to travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage for the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that reach it by serving the data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches are not examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional and finally a national-level cache. Web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular content to propagate towards the demand. &lt;br /&gt;
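The lookup-and-copy-down behaviour described above can be sketched as follows. This is a minimal illustration, not any particular cache implementation; every class, method and URL here is hypothetical.&lt;br /&gt;

```python
# Illustrative sketch of hierarchical web cache lookup; all names are
# hypothetical and "documents" are plain strings.
class CacheLevel:
    def __init__(self, name, parent=None):
        self.name = name          # e.g. "local", "regional", "national"
        self.parent = parent      # the next cache level up, or None at the top
        self.store = {}           # url -> cached document

    def get(self, url, fetch_origin):
        if url in self.store:                 # hit at this level
            return self.store[url]
        if self.parent is not None:           # miss: pass the request upward
            doc = self.parent.get(url, fetch_origin)
        else:                                 # top of the hierarchy: origin server
            doc = fetch_origin(url)
        self.store[url] = doc                 # leave a copy on the way back down
        return doc

national = CacheLevel("national")
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)

doc = local.get("http://example.com/", fetch_origin=lambda url: "<html>...</html>")
# The document is now cached at the local, regional and national levels, so a
# second request for it is satisfied locally without contacting the origin.
```

After one request the content has propagated toward the demand, which is exactly the bandwidth advantage claimed for the hierarchical design.&lt;br /&gt;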
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is a single level of caches that cooperate with one another to satisfy web requests. To do this, each cache retains meta-data about the contents of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
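The peer-cooperation idea above can be sketched in a few lines. The &amp;quot;directory&amp;quot; method stands in for the meta-data exchange between caches; all names are illustrative, not drawn from the cited systems.&lt;br /&gt;

```python
# Illustrative sketch of one level of cooperating caches.  Each cache can be
# asked for a "directory" of its contents, standing in for the meta-data that
# peers exchange; every name here is hypothetical.
class PeerCache:
    def __init__(self, name):
        self.name = name
        self.store = {}           # url -> cached document
        self.peers = []           # other caches at the same level

    def directory(self):
        return set(self.store)    # meta-data this cache advertises to its peers

    def get(self, url, fetch_origin):
        if url in self.store:                     # local hit
            return self.store[url]
        for peer in self.peers:                   # consult peers' meta-data next
            if url in peer.directory():
                return peer.store[url]
        doc = fetch_origin(url)                   # last resort: origin server
        self.store[url] = doc
        return doc

a, b = PeerCache("a"), PeerCache("b")
a.peers, b.peers = [b], [a]
b.store["http://example.com/"] = "<html>...</html>"
# a can now satisfy a request for that URL from b without contacting the origin.
```

Because a request is only sent to the origin when no cooperating peer holds the document, load spreads across the level and the loss of one cache merely shrinks the shared pool rather than breaking the system.&lt;br /&gt;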
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are currently implemented by ISPs, who do so because it is in their financial interest and not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end user&#039;s best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. It could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option to allow for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. The new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted in the form of unneeded web requests being sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by caches implemented by another local ISP must instead be retrieved all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. These types of web requests would then be satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national-level caches rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how web caching is done, can be implemented if it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme is incrementally deployable, both in software and in infrastructure. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would certainly involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system.  DNS (Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
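The name-to-address mapping just described can be exercised directly through the operating system&#039;s resolver. A minimal sketch (using &amp;quot;localhost&amp;quot; so the example needs no network access):&lt;br /&gt;

```python
# Ask the system's resolver (which consults DNS or the local hosts file)
# for the address behind a name.  "localhost" keeps the example offline.
import socket

print(socket.gethostbyname("localhost"))   # -> 127.0.0.1
# For a real site, the same call issues a DNS query through the configured
# resolver, e.g. socket.gethostbyname("example.com")
```

The application supplies only the name; everything else, from locating the right nameserver to returning the address, is the service&#039;s job.&lt;br /&gt;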
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher-level view of the system is taken.  It is considered for the purposes of this discussion as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database or tree of names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  It is understood by the users that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.  In the case of Google, there is reason to consider how Google will treat and use the information it gains access to, even given its clean track record in providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would have deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility lying on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These issues centre on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users.  For example, Bell Canada customers are served by only two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world cache records and refresh them only as each cached record&#039;s time-to-live (TTL) expires, so this period is required to get the changes across.&lt;br /&gt;
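The effect can be illustrated with a toy TTL-based resolver cache (a hypothetical sketch, not a real resolver API): until a cached record expires, the old answer keeps being served, which is exactly why changes take time to propagate.&lt;br /&gt;

```python
import time

# Toy TTL-based DNS cache; `resolve` stands in for an upstream query and
# every name here is hypothetical.
class DnsCache:
    def __init__(self, resolve):
        self.resolve = resolve    # name -> (ip, ttl_in_seconds)
        self.records = {}         # name -> (ip, expiry_timestamp)

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        record = self.records.get(name)
        if record is not None and record[1] > now:
            return record[0]                  # unexpired: serve the cached answer
        ip, ttl = self.resolve(name)          # expired or unknown: ask upstream
        self.records[name] = (ip, now + ttl)
        return ip

cache = DnsCache(lambda name: ("192.0.2.1", 3600))
cache.lookup("example.com", now=0)      # queries upstream, caches for an hour
cache.lookup("example.com", now=1800)   # still answered from the cache
# Even if the upstream record changes just after t=0, clients of this cache
# keep receiving the stale address until the TTL runs out at t=3600.
```

Scaled up to the many resolvers spread across the Internet, each holding records with TTLs of hours or days, this staleness window is what produces the multi-hour propagation delay described above.&lt;br /&gt;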
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, indicated by the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the web caching schemes presented in the previous section; the hierarchical structure described can function equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It also has the benefit of being incrementally deployable, which is very important when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been eliminated.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service.  Without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services suggest that the ideal scenario lies in public hands. Regardless of its implementation, it is a service that must be both reliable and trusted.  The user base depends on some form of trusted source, whether it is a governed initiative, a corporately controlled process, or a user-contributed service.  Having DNS in public hands would ensure this reliable service.  If the public is also in control of web caching, DNS can be incrementally deployed and rolled out as a piggyback to that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issues are addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it will be done when deemed most ideal for the public.  The incremental deployability of a service such as CoDoNS, described above, would make such upgrades practical to roll out.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users simply use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such arrangement must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring or mandating, and then maintaining, the current system will impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given their lower overhead and caution in decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands rapidly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are by definition something that everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is mandatory. If access to an aspect of the Internet cannot be guaranteed to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. Any potential public good should provide some kind of performance improvement, or there may be no real point in making it a public good.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more we will notice each individual advantage.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9516</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9516"/>
		<updated>2011-04-12T04:03:24Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* General Public Goods and the Internet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within the society. The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to the access and use of the Internet. Using three examples of public-good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated. Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, determining how to convert it into one is a more difficult process. The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We propose how these aspects could be taken out of the sole control of private companies and converted into public goods. We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased. While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication. Today the Internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual. For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise with the ISPs owning the infrastructure of the Internet. These companies make decisions based on their own profit margins, with little regard for the public good. One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;. Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs. While this can benefit everyone, with the technology implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times. We also do not know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure. Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP. This could be implemented by slowing or disallowing traffic to competitors. While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;. More recently we have become acutely aware that ISPs provide convenient choke points. In Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down. This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two. The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism. This would transform the infrastructure into a virtual public good by legislating the ISPs to behave in accordance with the best interest of the public. The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means. Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law is passed preventing it. These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet. We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people. This new infrastructure would coexist with the current ISPs and operate in parallel. Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds. In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal). This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure. The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries. Below we describe one possible implementation of such an infrastructure; in doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres in which the mesh is located could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;. When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres. Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing. Finally, at the highest level, different countries could connect their meshes together. As mentioned previously, these levels of connection parallel the levels of government that we have in Canada. As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed can increase too. Potentially, the privately owned ISP might even disappear entirely.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services such as email, instant messaging, and other services tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness. A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be. Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition. Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  This could also help make Internet access available for fiscally disadvantaged members of the population as well.  Finally, a mesh topology has the potential to extend Internet coverage to low density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to our low population density, has areas that draw a parallel to these rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out. It could start in a single neighborhood, using the wireless routers of the neighbours to create a small network. As the mesh grows it can be self-organizing, with the composing nodes elected to more prominent roles if they have sufficient speed. The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre. The density of connection points has been studied, and it has a direct relationship to the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing some infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two level system of Internet access that the mesh overlay provides, some software would need to be changed.  An example of this would be email which is normally considered a low bandwidth service. If a large attachment were present it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and the capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can sit somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end-user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that web caches add to the Internet, allowing users to access documents even when the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are treated as implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional and finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches use bandwidth efficiently by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
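The hierarchical lookup just described can be sketched in a few lines of Python (an illustrative sketch only; the level names, the ORIGIN table and the fetch helper are hypothetical, not part of any real cache system):

```python
# Illustrative sketch of a hierarchical web cache: a request climbs from the
# client-level cache up to the national level, and on the way back down a
# copy of the data is left at every level it passed through.
ORIGIN = {"http://example.com/": "page body"}   # stands in for the real web server

# lowest (client) to highest (national); the names are hypothetical
hierarchy = [{"name": "client",   "store": {}},
             {"name": "local",    "store": {}},
             {"name": "regional", "store": {}},
             {"name": "national", "store": {}}]

def fetch(url):
    for i, level in enumerate(hierarchy):
        if url in level["store"]:                # hit at this level
            data = level["store"][url]
            break
    else:                                        # miss everywhere: go to origin
        i, data = len(hierarchy) - 1, ORIGIN[url]
    for level in hierarchy[:i + 1]:              # leave a copy at each lower level
        level["store"][url] = data
    return data

first = fetch("http://example.com/")    # filled from the origin, cached everywhere
second = fetch("http://example.com/")   # now satisfied by the client-level cache
```

After the first request, every level holds a copy, which is exactly why popular sites "propagate towards the demand" in this design.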
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only a single level of caches, which cooperate with one another to satisfy web requests. To do this, each cache retains metadata about the contents of all of the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
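A minimal sketch of the cooperation just described (hypothetical, and not modeled on either of the cited systems): each cache advertises its contents into a shared metadata directory, and a cache with a local miss fetches directly from whichever peer holds the object:

```python
# Sketch of distributed web caching: a single level of caches sharing
# metadata about each other's contents. "directory" stands in for the
# metadata each cache would exchange with its peers.
class Cache:
    def __init__(self, name, directory):
        self.name = name
        self.store = {}
        self.directory = directory      # url -> the Cache object holding it

    def put(self, url, data):
        self.store[url] = data
        self.directory[url] = self      # advertise to cooperating caches

    def get(self, url):
        if url in self.store:
            return self.store[url], "local hit"
        holder = self.directory.get(url)
        if holder is not None:          # a peer has it: fetch from the peer
            return holder.store[url], "fetched from peer " + holder.name
        return None, "miss: forward to the origin server"

directory = {}
a, b = Cache("A", directory), Cache("B", directory)
a.put("http://example.com/", "page body")
data, how = b.get("http://example.com/")   # B has no copy, but knows A does
```

Because any peer can serve any cached object, load spreads across the level and the loss of one cache does not take its neighbours' contents offline.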
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists; however, a number of caches on each level cooperate with each other in a distributed fashion. This type of system can combine the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
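The hybrid lookup order can be sketched as follows (a hypothetical illustration only, not the actual ICP message format): on a miss, a cache first probes its siblings at the same level, ICP-style, and only then escalates to its parent:

```python
# Hybrid lookup sketch: local store, then same-level siblings, then parent.
def lookup(cache, url):
    if url in cache["store"]:                     # local hit
        return cache["store"][url]
    for sibling in cache["siblings"]:             # ICP-style sibling probe
        if url in sibling["store"]:
            return sibling["store"][url]
    if cache["parent"] is not None:               # escalate up the hierarchy
        return lookup(cache["parent"], url)
    return None                                   # would go to the origin server

parent = {"store": {"http://example.com/": "page body"}, "siblings": [], "parent": None}
c1 = {"store": {"http://other.com/": "other page"}, "siblings": [], "parent": parent}
c2 = {"store": {}, "siblings": [c1], "parent": parent}
```

Here `lookup(c2, ...)` is answered by the sibling when possible and by the parent otherwise, mirroring how ICP lets siblings share content without every miss climbing the hierarchy.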
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is obviously important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power, removing the burden from users&#039; computers and allowing a special purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could actually vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity would arise to extend the classic definition of web caching. This new infrastructure could allow for the caching of web application code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and maintain only a minimal back end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. A small number of people with a very good idea could then realistically come together to implement their application: growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available only to modern day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this broader definition of web caching is that fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even one whose section of the Internet is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would retain access to all of the web data stored in the reachable caches. This added robustness would certainly reduce the panic inherent in such situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests being sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by caches of another local ISP must instead be retrieved all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such web requests would then be satisfied locally, reducing the number of long distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into the public hands, private interests in how web caches are controlled would now be a secondary concern to those of the public. This means that any innovation in web caching along with new technologies to improve how web caching is done can be implemented if it is in the best interests of the public. Currently we must rely on these upgrades being a worthy investment for a given ISP, regardless of how much upgrades would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in terms of both software and infrastructure, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already deployed, would involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax, or possibly through usage fees. Second, if the low level neighbourhood caches were implemented, individual users would have to either provide CPU cycles and storage space (which is itself a cost) or purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the purposes of this paper, many technical details are avoided in favour of a simpler, higher level view of the system. For this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  Users must accept that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternatives, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.  In the case of Google, there is reason to consider how Google will treat and use the information it gains access to, even given a clean track record when it comes to providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to determine every single thing that is sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues must also be considered for any &amp;quot;community-based&amp;quot; project.  As strong as user generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These issues revolve around bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must be accessed by many users.  For example, Bell Canada customers are served by two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world refresh their records on a static schedule and, with caching at many levels, this period is required for the changes to reach everyone.&lt;br /&gt;
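The delay can be illustrated with a toy resolver (a hypothetical sketch; real resolvers honour a per-record TTL set by the domain owner): a cached answer is served until it expires, so a nameserver change only becomes visible once every cached copy has timed out:

```python
import time

# Toy caching resolver: answers are cached until their expiry time, so a
# change at the authoritative source is invisible until the cache times out.
class Resolver:
    def __init__(self, authoritative, ttl):
        self.authoritative = authoritative   # name -> IP at the source
        self.ttl = ttl                       # seconds a cached answer stays valid
        self.cache = {}                      # name -> (ip, expiry timestamp)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(name)
        if entry is not None and entry[1] > now:   # cached and not yet expired
            return entry[0]
        ip = self.authoritative[name]              # expired or absent: re-query
        self.cache[name] = (ip, now + self.ttl)
        return ip

auth = {"example.com": "192.0.2.1"}
r = Resolver(auth, ttl=48 * 3600)            # 48-hour horizon, as in the text
r.resolve("example.com", now=0)              # first lookup caches the old IP
auth["example.com"] = "192.0.2.99"           # the domain moves to a new server
stale = r.resolve("example.com", now=3600)   # one hour later: still the old IP
fresh = r.resolve("example.com", now=200000) # after expiry: the new IP appears
```

The window between the change and the last cache expiry is exactly the propagation delay described above.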
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, indicated by the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack, however, like anything security based, it requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS.  DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the web caching schemes presented in the previous section; the hierarchical structure described there can function equally well for DNS purposes.  Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer to peer distribution, the system boasts improvements on all the factors indicated above.  It is also incrementally deployable, a very important property when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
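The decentralization idea can be illustrated with a generic consistent-hashing sketch (hypothetical; CoDoNS itself is built on a peer-to-peer overlay with proactive replication, not this exact scheme): names hash onto a ring of cooperating nodes, so no single server is a point of failure and lookups spread across participants:

```python
import hashlib

RING_SIZE = 2 ** 16

def ring_position(key):
    # Hash a name or node identifier onto a fixed-size ring.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % RING_SIZE

def responsible_node(name, nodes):
    # The first node at or after the name's ring position owns the name
    # (wrapping around to the start of the ring if necessary).
    pos = ring_position(name)
    positions = sorted((ring_position(n), n) for n in nodes)
    for node_pos, node in positions:
        if node_pos >= pos:
            return node
    return positions[0][1]

nodes = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical participants
owner = responsible_node("example.com", nodes)     # one node serves this name
backup = responsible_node("example.com", [n for n in nodes if n != owner])
# If the owner fails, the same rule deterministically picks a surviving node.
```

Because responsibility is recomputed from whatever nodes remain, the loss of any single node reroutes its names to a neighbour rather than taking the service down.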
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate for a public good.  The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires a naming service.  Without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, the service is required to be both reliable and trusted.  The user base depends on some form of trusted source, whether a governed initiative, a corporately controlled process, or a user contributed service.   Having DNS in public hands would ensure this reliable service.  If the public were also in control of web caching, DNS could be incrementally deployed and rolled out as a piggyback to that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is satisfied.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most ideal for the public.  The incremental deployability of the CoDoNS service described above would, for example, allow such an upgrade to be rolled out gradually without disrupting the existing system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache could be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized, as users need only use sites as they see fit and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring and maintaining the current system, or mandating its operation, will impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemas sooner than a public authority, given their lower overhead and reduced caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves to be vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these qualities, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are by definition something that everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is mandatory. If guaranteed access cannot be provided to all of the users within its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system, and it is a central concern here as well.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect of their benefits. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet that fulfill the above criteria are converted into public goods, the more pronounced each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test, and deploy them to the world wide web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern-day society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet today for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to successfully identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9515</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9515"/>
		<updated>2011-04-12T04:02:56Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* General Public Goods and the Internet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.    In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs using an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of the consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is currently used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While congestion control can benefit everyone, with the technology implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  In Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents.  There are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
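The super-node election mentioned above can be sketched in a few lines. This is a hypothetical illustration only, not part of the cited routing protocols: the node records (availability, bandwidth) and the 20% election fraction are assumptions chosen for the example.&lt;br /&gt;

```python
# Hypothetical sketch: electing mesh "super nodes" responsible for routing.
# Node records and the 20% election fraction are illustrative assumptions.

def elect_super_nodes(nodes, fraction=0.2):
    """Rank nodes by availability, then bandwidth, and pick the top fraction."""
    ranked = sorted(nodes, key=lambda n: (n["availability"], n["bandwidth"]),
                    reverse=True)
    count = max(1, int(len(ranked) * fraction))   # always elect at least one
    return [n["name"] for n in ranked[:count]]

nodes = [
    {"name": "home-pc-1", "availability": 0.99, "bandwidth": 50},
    {"name": "laptop-2", "availability": 0.40, "bandwidth": 30},
    {"name": "home-pc-3", "availability": 0.95, "bandwidth": 20},
    {"name": "phone-4", "availability": 0.20, "bandwidth": 10},
]
supers = elect_super_nodes(nodes)   # the stable, fast home machines win
```

Ranking static, highly available home machines above mobile devices reflects the split the paper describes between fairly static high-availability nodes and highly mobile ones.&lt;br /&gt;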
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other similar services more tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs.  This in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  This could also help make Internet access available for fiscally disadvantaged members of the population as well.  Finally, a mesh topology has the potential to extend Internet coverage to low density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to our low population density, has areas that draw a parallel to these rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll out.  It could start in a single neighborhood, using the wireless connections of the neighbours to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds the mesh can sustain, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing some infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible at each.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two level system of Internet access that the mesh overlay provides, some software would need to be changed.  An example of this would be email which is normally considered a low bandwidth service. If a large attachment were present it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and the capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of many websites that do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
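The lookup-and-copy-down behaviour described above can be sketched in a few lines of Python (an illustrative sketch only; the class and function names are invented for this example, not taken from any cited system):&lt;br /&gt;

```python
# Illustrative sketch of a hierarchical web cache; all names here
# (CacheLevel, fetch_from_origin) are invented for the example.

def fetch_from_origin(url):
    # Stand-in for contacting the originating web server.
    return "content of " + url

class CacheLevel:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # next level up the hierarchy, if any
        self.store = {}        # url -> cached data

    def get(self, url):
        # Serve the object from this level if it is cached here.
        if url in self.store:
            return self.store[url]
        # Otherwise pass the request to the next level up ...
        data = self.parent.get(url) if self.parent else fetch_from_origin(url)
        # ... and leave a copy at this level on the way back down.
        self.store[url] = data
        return data

# client -> local -> regional -> national, as in the text.
national = CacheLevel("national")
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)
client = CacheLevel("client", parent=local)

client.get("http://example.org/page")
```

After this single request, every level on the path holds a copy, so a neighbouring client behind the same local cache would be served without the request leaving the region.&lt;br /&gt;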
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only a single level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the contents of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
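A minimal sketch of this single-level cooperation, assuming a simple push-based index of peer contents (the class name and the advertisement mechanism are invented for illustration, not taken from the cited systems):&lt;br /&gt;

```python
# Illustrative sketch of distributed web caching: one level of caches,
# each holding meta-data about what its peers store.

class DistributedCache:
    def __init__(self, name):
        self.name = name
        self.store = {}        # url -> data held locally
        self.peer_index = {}   # url -> the peer cache that holds it
        self.peers = []

    def link(self, peers):
        self.peers = peers

    def insert(self, url, data):
        self.store[url] = data
        # Advertise the new entry so peers can update their meta-data.
        for peer in self.peers:
            peer.peer_index[url] = self

    def get(self, url):
        if url in self.store:
            return self.store[url]
        # Consult meta-data about cooperating caches before giving up.
        holder = self.peer_index.get(url)
        if holder is not None:
            return holder.store[url]
        return None   # would fall back to the originating web server

a, b, c = DistributedCache("a"), DistributedCache("b"), DistributedCache("c")
a.link([b, c]); b.link([a, c]); c.link([a, b])
a.insert("http://example.org/", "page data")
```

Because every cache knows which peer holds what, a request arriving at any of the cooperating caches is satisfied without a hierarchy, and the loss of one cache only loses its local contents.&lt;br /&gt;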
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists; however, there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
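The hybrid lookup order (local store, then siblings at the same level, then the parent level) can be sketched as follows; this is an illustrative simplification, not the actual ICP message exchange defined in RFC 2186:&lt;br /&gt;

```python
# Illustrative sketch of a hybrid cache lookup, not the ICP wire protocol.
from types import SimpleNamespace

def hybrid_get(cache, url):
    if url in cache.store:
        return cache.store[url]
    # Distributed element: query cooperating siblings at the same level.
    for sibling in cache.siblings:
        if url in sibling.store:
            return sibling.store[url]
    # Hierarchical element: fall back to the next level up.
    if cache.parent is not None:
        data = hybrid_get(cache.parent, url)
    else:
        data = "origin:" + url     # stand-in for the originating server
    cache.store[url] = data        # keep a copy on the way back down
    return data

parent = SimpleNamespace(store={}, siblings=[], parent=None)
c1 = SimpleNamespace(store={}, siblings=[], parent=parent)
c2 = SimpleNamespace(store={"http://a/": "cached page"}, siblings=[], parent=parent)
c1.siblings, c2.siblings = [c2], [c1]
```

A miss at one cache is first offered to its siblings, so the parent level (and ultimately the origin server) is only contacted when the whole level misses.&lt;br /&gt;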
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to the end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are currently implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP controlled web caches into a public good would allow for a balance between the financial and end user experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously only available to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood specific, ultra fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to that of BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP controlled proxy server would decide what data exists where and point users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option to allow for lower level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at large scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with the demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would develop their applications to make use of the available resources and then maintain a minimal back end system to essentially tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. This would mean that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available to modern day corporations.   &lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. This means that a region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted in the form of unneeded web requests being sent from the caches to the originating web servers will go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by caches of another local ISP must instead be serviced all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in such web requests being satisfied locally, reducing the number of long distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into the public hands, private interests in how web caches are controlled would now be a secondary concern to those of the public. This means that any innovation in web caching along with new technologies to improve how web caching is done can be implemented if it is in the best interests of the public. Currently we must rely on these upgrades being a worthy investment for a given ISP, regardless of how much upgrades would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software and infrastructure wise, is incrementally deployable. It is imagined that a scheme similar to the one proposed would most likely start off in a few selected cities, maybe with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users/regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or to set up the new caches. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems are set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which, in itself, is a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system.  DNS (Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. It is essential for the functionality and usability of the Internet to have this service.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher level view of the system is taken.  It is considered, for the purposes of this discussion, as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database or tree of names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  It is understood by the users that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when and if a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed via any number of alternative options such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still means placing significant trust in another corporation or in &amp;quot;good Samaritans&amp;quot; in a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise when considering user privacy though.  In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given a clean track record, when it comes to providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would now have deep access to user behaviour in being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;. Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility lying on the back of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, there are some problems that arise with the current implementations.  These issues arise around bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is to be considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects, as a small number of servers must be accessed by many users.  For example, Bell Canada customers are served by only two servers for the entire country.  Just as web caching has been shown in a previous section to improve general browsing and decrease latency, the same concept can be used for DNS lookups.&lt;br /&gt;
The small number of servers also affects attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world cache records and refresh them on a static schedule, so this period is required for a change to become visible everywhere.&lt;br /&gt;
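The effect can be illustrated with a toy resolver that caches records until a fixed expiry time (a sketch under simplified assumptions; real resolvers honour the per-record time-to-live published by the authoritative nameserver):&lt;br /&gt;

```python
# Toy caching resolver: a cached record is served until it expires,
# so an upstream change stays invisible for up to the cache lifetime.

class CachingResolver:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.cache = {}   # name -> (address, expiry timestamp)

    def resolve(self, name, lookup, now):
        entry = self.cache.get(name)
        if entry is not None and entry[1] > now:
            return entry[0]             # still fresh: serve the cached record
        address = lookup(name)          # expired or absent: ask upstream
        self.cache[name] = (address, now + self.ttl)
        return address

authoritative = {"example.org": "192.0.2.1"}
resolver = CachingResolver(ttl_seconds=3600)

first = resolver.resolve("example.org", authoritative.get, now=0)
authoritative["example.org"] = "198.51.100.7"     # the domain moves
stale = resolver.resolve("example.org", authoritative.get, now=1800)
fresh = resolver.resolve("example.org", authoritative.get, now=3601)
```

Halfway through the cache lifetime the resolver still answers with the old address; only after the cached entry expires does the new address appear, which is exactly why nameserver changes take hours to settle.&lt;br /&gt;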
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, indicated by the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack, however, like anything security based, it requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS.  DNS caching improves performance by reducing latency, much like web caching does for regular content browsing.  DNS caches could be contained within the Web Caching Schemes presented in the previous section.  The hierarchical structure described can function equally well for DNS purposes.  Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby locally providing content in a somewhat democratic fashion; users are able to dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even if it is aided by caching.  One candidate as a next generation naming system, actively being researched at Cornell University, is entitled the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It also adds the benefit that it is incrementally deployable, which is a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications will be avoided in favour of looking at the role of DNS as a whole for its use as a public good. In essence, the static DNS tree is decentralized and distributed across the network with this implementation.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service.  Without a functional service, the bulk of Internet traffic would falter, as it would not know where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, it is a service that is required to be both reliable and trusted.  A user base is dependent on some form of trusted source, whether it is a governed initiative, a corporately controlled process, or a user contributed service.   Having DNS in public hands will ensure this reliable service.  If the public is also in control of web caching, DNS can be incrementally deployed and rolled out piggybacking on that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is satisfied.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most ideal for the public.  The incremental deployability of the CoDoNS service described above shows how such an upgrade could be rolled out gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, reduced wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized, as users only need to use the sites as they see fit and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so it is required that any such organization work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as would any good brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority, given less overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates any upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these qualities, it will improve the overall effectiveness of the Internet as a whole.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If guaranteed access to an aspect of the Internet cannot be provided to all of the users within its reach, then it should not, by definition, be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system, and it is a central concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different than the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more pronounced each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet today for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to identify future public goods candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9514</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9514"/>
		<updated>2011-04-12T04:01:47Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* From Presentation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within the society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it to one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and runs using an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise with the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is currently used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While congestion control may benefit everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We also don&#039;t know if this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure.  Another potential problem is the ISPs giving preferential treatment to websites or webservices that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been openly proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  In Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents.  There are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the behaviours of the ISPs to be in accordance with the best interest of the public.  There are problems in that politicians have their own goals and can be influenced unduly by private industries through lobbyists or other means.  Additionally, the government is slow to act, and this could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing the current behaviour.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it creates its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we will describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well; as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
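As a toy illustration of the super node election mentioned above, the sketch below simply promotes the highest-capacity node in each neighbourhood to the routing role; the capacity metric, names, and data layout are our assumptions, and real protocols such as DART are far more involved:&lt;br /&gt;

```python
# Hypothetical sketch (names are illustrative, not from any cited routing
# protocol): elect the highest-capacity node in each neighbourhood to act
# as a super node responsible for routing.

def elect_super_nodes(neighbourhoods):
    """neighbourhoods maps a region name to a dict of node name to
    capacity (e.g. sustained wireless bandwidth in Mbps). Returns a dict
    mapping each region to its elected super node."""
    elected = {}
    for region, nodes in neighbourhoods.items():
        # The highest-capacity available node wins the election.
        elected[region] = max(nodes, key=nodes.get)
    return elected
```

A real mesh would re-run such an election whenever nodes join, leave, or change availability, which is where most of the routing-accuracy challenge cited above lies.&lt;br /&gt;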
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services, such as email and instant messaging, that are more tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs&#039; networks.  This in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, with its low population density, has areas comparable to the rural regions where this technology has been deployed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the wireless connections of neighbours to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds sustainable by the mesh, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing some infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, making the increase at each almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to be changed.  An example of this would be email, which is normally considered a low-bandwidth service. If a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and the capability of the two networks and switch between them in specific cases.&lt;br /&gt;
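The kind of network-aware switching described above could look roughly like the following Python sketch; the byte threshold and network labels are purely illustrative assumptions, not part of any existing software:&lt;br /&gt;

```python
# Hypothetical sketch of network-aware software: route small transfers over
# the free public mesh and large ones (e.g. a big email attachment) over the
# faster private link when it is available. Threshold and names are assumed.
import operator

MESH_THRESHOLD_BYTES = 512 * 1024  # transfers larger than this prefer speed

def choose_network(transfer_bytes, fast_link_available):
    """Return the network a transfer should use: the private ISP link for
    large transfers when it is available, otherwise the public mesh."""
    large = operator.gt(transfer_bytes, MESH_THRESHOLD_BYTES)
    if large and fast_link_available:
        return "private-isp"
    return "public-mesh"
```

In practice the decision would also weigh current mesh congestion and user preference, but the core of the change is exactly this kind of per-transfer dispatch.&lt;br /&gt;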
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of many websites that do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency to the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance that the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be reduced significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache brings to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
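The lookup-and-copy-down behaviour described above can be sketched in a few lines of Python. This is a simplified model, assuming one dict-backed cache per level and an illustrative OriginServer stand-in; none of these names come from the cited survey:&lt;br /&gt;

```python
# Toy model of hierarchical web caching: a miss climbs the levels
# (client, local, regional, national), and the response is copied into
# every level below the one that answered on the way back down.

class OriginServer:
    def __init__(self, pages):
        self.pages = pages
        self.hits = 0   # counts requests that reach the origin

    def fetch(self, url):
        self.hits += 1
        return self.pages[url]

def hierarchical_get(url, levels, origin):
    """levels is a list of dict caches ordered from lowest (client) to
    highest (national); a miss at every level falls through to origin."""
    for i, cache in enumerate(levels):
        if url in cache:
            data = cache[url]
            break
    else:
        i, data = len(levels), origin.fetch(url)
    # Leave a copy at every level below the one that answered.
    for cache in levels[:i]:
        cache[url] = data
    return data
```

Repeating a request after a first fetch is then served entirely from the lowest-level cache, which is exactly the bandwidth saving the hierarchy is designed for.&lt;br /&gt;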
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is a single level of caches that cooperate with one another to satisfy web requests. To do this, each cache retains meta-data about the contents of all of the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
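As a rough illustration of the distributed scheme, the following hypothetical sketch has each cache consult peer meta-data (here, simply the set of URLs a peer holds) before contacting the origin server. None of these names come from the cited systems.&lt;br /&gt;

```python
# Illustrative sketch of one-level distributed caching: peers advertise
# meta-data about their contents, and a miss is forwarded to a peer that
# is known to hold the document before falling back to the origin server.

class PeerCache:
    def __init__(self, name):
        self.name = name
        self.store = {}               # url -> document
        self.peers = []               # cooperating caches on the same level

    def summary(self):
        return set(self.store)        # the meta-data advertised to peers

    def put(self, url, doc):
        self.store[url] = doc

    def get(self, url, origin):
        if url in self.store:         # local hit
            return self.store[url]
        for peer in self.peers:       # consult peer meta-data first
            if url in peer.summary():
                return peer.store[url]
        doc = origin[url]             # no peer has it: fetch from the origin
        self.store[url] = doc
        return doc

a, b = PeerCache("a"), PeerCache("b")
a.peers, b.peers = [b], [a]
b.put("example.org/page", "cached copy")
assert a.get("example.org/page", {}) == "cached copy"   # served by the peer
```

Because requests can be answered by any cooperating peer, load spreads across the level and the loss of one cache does not take its peers&#039; contents with it.&lt;br /&gt;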
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists, but the caches on each level also cooperate with each other in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to those users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches, and finally a national level. These would all be standardized, allowing regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized, standardized web cache hierarchies would reduce wasted bandwidth and improve the end-user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery can remove a significant amount of the network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power, removing the burden from users&#039; computers and letting a special-purpose device take over. Since most users reset their modems far less often than they shut down their computers, this would provide greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood, or even house to house, depending on circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built quite inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily translate into dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available only to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that it would allow individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole to still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would retain access to all of the web data stored in reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached web request from one ISP&#039;s users that could be satisfied by another local ISP&#039;s cache must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast, translating into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by regional, provincial or national-level caches rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy adds a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level would likely exceed what can be used efficiently as a cache, data could be duplicated. This duplication would provide fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability through full web application caching: if a single region were disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached means that any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies that improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently, we must rely on such upgrades being a worthwhile investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in terms of both software and infrastructure, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already deployed, would certainly entail significant infrastructure costs. Although a considerable amount of infrastructure may exist in large urban centers, rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet: to make the system work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the purposes of this paper, many technical details are avoided in favour of a more simplistic, higher-level view of the system. DNS is considered here as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
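The static-tree view adopted here can be made concrete with a toy sketch. The tree contents and the address below are purely illustrative, not real DNS data.&lt;br /&gt;

```python
# Toy model of the DNS view used in this discussion: a static tree whose
# leaves hold IP addresses, walked label by label from the root, so
# "scs.carleton.ca" is looked up as "ca", then "carleton", then "scs".

DNS_TREE = {
    "ca": {"carleton": {"scs": "134.117.0.1"}},   # illustrative address only
}

def resolve(name, tree=DNS_TREE):
    node = tree
    for label in reversed(name.split(".")):       # walk from the root down
        node = node[label]
    return node                                   # a leaf is an IP address

assert resolve("scs.carleton.ca") == "134.117.0.1"
```

The real system distributes the branches of this tree across many nameservers, but the query semantics are the same: a name goes in, an address comes out.&lt;br /&gt;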
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service. In practice, this means all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011 [http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011 [http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011 [http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their DNS requests to be processed by any number of alternatives, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS [http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
User privacy is also a concern. In the case of Google, it is worth considering how the company will treat and use the information it gains access to, even given its clean track record in providing free applications and services.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011 [http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would gain deep access to user behaviour, being able to determine every single thing a user seeks out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011 [http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues must also be considered for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing and generating content, it is difficult to imagine this large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011 [http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations. These concern bottlenecks, update propagation, attack resiliency and general performance. Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users. For example, Bell Canada customers are served by two servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain&#039;s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world update their records on a static schedule and, once caching is taken into account, this is the period required to get the changes across.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks. Malicious users can target the limited number of servers to severely cramp Internet traffic at any time. Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS performance. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing. DNS caches could be contained within the web caching schemes presented in the previous section, as the hierarchical structure described can function equally well for DNS purposes. Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion: users dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
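A minimal sketch of such a DNS cache, with every name below assumed for illustration, shows how a time-to-live (TTL) both saves upstream lookups and bounds how long a stale record can linger, which is the propagation delay discussed earlier.&lt;br /&gt;

```python
import time

# Hypothetical TTL-based DNS cache: answers are served locally while fresh,
# and refetched from an upstream resolver once their time-to-live expires.

class DnsCache:
    def __init__(self, ttl, clock=time.monotonic):
        self.ttl = ttl                # seconds an answer stays fresh
        self.clock = clock            # injectable clock, eases testing
        self.entries = {}             # name -> (address, time stored)

    def lookup(self, name, upstream):
        entry = self.entries.get(name)
        if entry is not None and self.clock() - entry[1] < self.ttl:
            return entry[0]           # fresh cached answer, no upstream query
        address = upstream(name)      # expired or missing: ask upstream
        self.entries[name] = (address, self.clock())
        return address

# Deterministic demonstration with a fake clock and a counting upstream.
now = [0.0]
cache = DnsCache(ttl=10, clock=lambda: now[0])
queries = []
def upstream(name):
    queries.append(name)
    return "192.0.2.1"               # documentation-range address

cache.lookup("example.org", upstream)
cache.lookup("example.org", upstream)   # served from cache
assert len(queries) == 1
now[0] = 11.0                           # TTL elapsed
cache.lookup("example.org", upstream)   # stale: refetched upstream
assert len(queries) == 2
```

The TTL is the knob behind the 48-hour propagation figure: a shorter TTL spreads nameserver changes faster at the price of more upstream traffic.&lt;br /&gt;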
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching. One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011 [http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts improvements on all the factors indicated above. It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Given the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network, removing the bottlenecks and increasing resiliency against attack, as the single points of failure are eliminated.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good. Directing traffic on the Internet, whether user-based or application-based, requires a naming service; without a functional one, the bulk of Internet traffic would falter, not knowing where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services suggest that the ideal scenario lies in public hands. Regardless of its implementation, the service must be both reliable and trusted. The user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service. Having DNS in public hands would ensure this reliable service, and if the public also controls web caching, DNS improvements can be incrementally deployed by piggybacking on that rollout.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest driving the maintenance of this service, the reliability and trust issues are addressed. Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind. Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it would be done when deemed most beneficial for the public. The incremental deployability of a service such as CoDoNS, described above, makes it a natural fit for such a transition.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically alongside a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users simply use the sites they see fit, and those sites appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public good, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring (or mandating) and maintaining the current system would impose a financial burden on the public, as does any good that is brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these qualities, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are, by definition, something that everyone should have access to and something deemed essential, ensuring a basic level of service for users is essential for any Internet public good. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect of their benefits. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet that fulfill the above criteria are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9513</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9513"/>
		<updated>2011-04-12T04:01:07Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Disadvantages of DNS as a Public Good */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable, both to individuals and to the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, determining how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We propose how these aspects could be removed from the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is currently used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While avoiding congestion benefits everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We also don&#039;t know whether this technology is deployed merely to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During an uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
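To make the mechanism concrete: one common way to implement the rate limiting behind traffic shaping is a token bucket. The sketch below is a generic illustration with made-up rate and burst figures, not a description of any particular ISP&#039;s system:&lt;br /&gt;

```python
import time

class TokenBucket:
    """Illustrative token-bucket shaper: a packet is forwarded only while
    byte credit remains; otherwise it is queued or dropped."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s     # long-term shaped rate
        self.capacity = burst_bytes      # short bursts above the rate allowed
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Add credit earned since the last check, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True    # forward immediately
        return False       # traffic exceeds the shaped rate: queue or drop

shaper = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=10_000)  # ~1 Mbit/s
assert shaper.allow(1_500)         # within the burst credit
assert not shaper.allow(100_000)   # exceeds the remaining credit
```

An ISP prioritizing traffic would, in effect, run buckets like this with different parameters per protocol class; the complaint above is that those parameters are invisible to users.&lt;br /&gt;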
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the ISPs to behave in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably this new infrastructure would not be as fast as the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure, with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well, and as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other similar services more tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs.  This in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail, while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for some users, whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural regions where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure in the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds that the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
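A minimal sketch of such network-aware switching; the threshold, the network labels, and the function name are purely hypothetical illustrations of the idea, not any existing API:&lt;br /&gt;

```python
# Hypothetical sketch of the switching logic described above: a client keeps
# small transfers on the free public mesh and falls back to a paid ISP link
# only for large ones (e.g. a big email attachment).
MESH_THRESHOLD_BYTES = 512 * 1024   # transfers up to ~512 KiB stay on the mesh

def pick_network(transfer_bytes, isp_available):
    # Large transfers justify the faster paid link, when one exists.
    if transfer_bytes > MESH_THRESHOLD_BYTES and isp_available:
        return "isp"
    return "public-mesh"            # slower but universally available

assert pick_network(10_000, isp_available=True) == "public-mesh"
assert pick_network(5_000_000, isp_available=True) == "isp"
assert pick_network(5_000_000, isp_available=False) == "public-mesh"
```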
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be reused later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data then, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can sit somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance that the data has to travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage at the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that reach it by serving data the cache has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache brings to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
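The store-on-miss behaviour described above can be sketched in a few lines. This is an illustration only: the fetch function stands in for a real HTTP request, and the hit/miss counters exist just to make the bandwidth saving visible:&lt;br /&gt;

```python
class ProxyCache:
    """Minimal store-on-miss web cache (illustrative sketch)."""

    def __init__(self, fetch_origin):
        self.fetch_origin = fetch_origin  # stand-in for a real HTTP fetch
        self.store = {}                   # url -> cached response body
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1       # served locally: no upstream bandwidth used
            return self.store[url]
        self.misses += 1
        body = self.fetch_origin(url)   # only a miss reaches the origin server
        self.store[url] = body
        return body

cache = ProxyCache(fetch_origin=lambda url: "body-of-" + url)
cache.get("http://example.org/logo.png")
cache.get("http://example.org/logo.png")   # second request hits the cache
assert cache.hits == 1 and cache.misses == 1
```

A real cache would additionally honour expiry headers and evict old entries, which is where the "barring certain conditions" above comes in.&lt;br /&gt;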
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
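The lookup path just described can be sketched as follows. The level names are hypothetical, and real hierarchical caches are far more involved; this only illustrates the miss path up the hierarchy and the copies left on the way back down:&lt;br /&gt;

```python
class CacheLevel:
    """One level in a hierarchical web cache (client -> local -> regional ->
    national). Illustrative sketch only."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.store = {}

    def get(self, url, fetch_origin):
        if url in self.store:
            return self.store[url], self.name
        if self.parent is not None:
            body, served_by = self.parent.get(url, fetch_origin)
        else:
            body, served_by = fetch_origin(url), "origin"
        self.store[url] = body   # leave a copy at each level on the way down
        return body, served_by

national = CacheLevel("national")
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)

fetch = lambda url: "page-body"
body, served_by = local.get("http://example.org/", fetch)
assert served_by == "origin"       # first request travels the whole hierarchy
_, served_by = regional.get("http://example.org/", fetch)
assert served_by == "regional"     # now satisfied mid-hierarchy
```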
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains metadata about the contents of all of the other caches it cooperates with, and uses it to fulfill the web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
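A minimal sketch of this style of cooperation, assuming each node keeps a simple directory of which peer holds which URL (all names here are hypothetical, not drawn from the cited systems):&lt;br /&gt;

```python
class CacheNode:
    def __init__(self, name):
        self.name = name
        self.store = {}        # url -> content held locally
        self.directory = {}    # url -> peer believed to hold it (the meta-data)

    def put(self, url, content, peers):
        self.store[url] = content
        for peer in peers:                 # advertise the new entry to cooperating peers
            peer.directory[url] = self

    def get(self, url):
        if url in self.store:              # local hit
            return self.store[url]
        holder = self.directory.get(url)   # consult meta-data about peers
        if holder is not None and url in holder.store:
            return holder.store[url]
        return None                        # miss everywhere: must contact the origin

node_a = CacheNode("a")
node_b = CacheNode("b")
node_a.put("http://example.com/", "page body", peers=[node_b])
hit = node_b.get("http://example.com/")    # served from node_a's copy
miss = node_b.get("http://example.org/")   # not cached by any cooperating node
```

Because requests can be spread across whichever peer holds the data, load balancing and fault tolerance follow naturally from this structure.&lt;br /&gt;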
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, but a number of caches on each level also cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
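The sibling-then-parent lookup such a hybrid performs might be sketched as follows; the icp_query method merely stands in for the UDP query/hit exchange that ICP actually defines, and all class names are illustrative:&lt;br /&gt;

```python
class HybridCache:
    def __init__(self, name, siblings=None, parent=None):
        self.name = name
        self.siblings = siblings if siblings is not None else []
        self.parent = parent
        self.store = {}

    def icp_query(self, url):
        # Stand-in for an ICP query/hit exchange with a same-level peer.
        return url in self.store

    def get(self, url, fetch_from_origin):
        if url in self.store:                     # local hit
            return self.store[url]
        for sibling in self.siblings:             # distributed: poll same-level peers
            if sibling.icp_query(url):
                content = sibling.store[url]
                break
        else:                                     # hierarchical: escalate upward
            if self.parent is not None:
                content = self.parent.get(url, fetch_from_origin)
            else:
                content = fetch_from_origin(url)
        self.store[url] = content
        return content

regional = HybridCache("regional")
local_a = HybridCache("local-a", parent=regional)
local_b = HybridCache("local-b", parent=regional)
local_a.siblings = [local_b]
local_b.siblings = [local_a]

origin_fetches = []
def fetch(url):
    origin_fetches.append(url)
    return "page body"

first = local_a.get("http://example.com/", fetch)   # miss everywhere: one origin fetch
second = local_b.get("http://example.com/", fetch)  # satisfied by sibling local_a
```

Only one request ever reaches the origin server; the second is answered at the same level, combining the advantages of both architectures.&lt;br /&gt;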
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to the end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are currently implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP controlled web caches into a public good would allow for a balance between both the financial and end user experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in the exact same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood specific, ultra fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try and retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option to allow for lower level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful device could be built inexpensively, especially at large scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while making use of the locally cached data as well. In this type of system, web developers would design their applications to make use of the available resources and then maintain a minimal back end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. This would mean that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially this would allow anyone to write the next Facebook or Google without needing the enormous financial or physical resources available to modern day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. This would mean that a region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through the use of popular social networking websites such as Facebook, as well as have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests being sent from the caches to the originating web servers will go down. Currently web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by the caches of another local ISP must instead be retrieved all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in these types of web requests being satisfied locally, reducing the number of long distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches rather than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication would allow for fault tolerance, and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached means that any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region is disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how web caching is done, can be implemented if it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software and infrastructure wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, maybe with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the already existing ISP caches, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which, in itself, is a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system.  DNS (Domain Name System) aims to aid this process by allowing resources to be referred to by name, rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a simpler, higher level view of the system is taken.  For the purposes of this discussion it is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Provision of the service currently falls to an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
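This black-box view of DNS, supply a name and receive an address, can be exercised directly from Python&#039;s standard library; socket.gethostbyname simply asks whichever resolver the operating system is configured with (the ISP&#039;s, Google&#039;s, OpenDNS&#039;s, and so on):&lt;br /&gt;

```python
import socket

def resolve(hostname):
    # Performs an IPv4 DNS lookup via the system-configured resolver.
    return socket.gethostbyname(hostname)

# "localhost" resolves from the local hosts database on a conventionally
# configured machine; public names are resolved over the network.
address = resolve("localhost")
```

Which server ultimately answers the query is invisible to the caller, which is precisely why the choice of provider (ISP, public alternative, or public good) matters.&lt;br /&gt;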
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  It is understood by the users that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in &amp;quot;good Samaritans&amp;quot; in a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.  In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given a clean track record when it comes to providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would now have deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility lying on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of effort from the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, there are some problems with the current implementations.  These issues centre around bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects due to a small number of servers being accessed by many users.  For example, Bell Canada customers are served by two DNS servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be used for DNS lookups.&lt;br /&gt;
The small number of servers affects attack resiliency as well, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world update their records on fixed schedules and cache responses, so this period is required for changes to make their way across.&lt;br /&gt;
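The mechanism behind this delay can be sketched with a toy resolver cache: each record carries a time-to-live (TTL), and the cached answer keeps being served until the TTL expires, regardless of when the authoritative record changed. All names and the example address below are illustrative:&lt;br /&gt;

```python
import time

class DnsCache:
    def __init__(self):
        self.records = {}   # name -> (address, expiry timestamp)

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self.records[name] = (address, now + ttl)

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.records.get(name)
        if entry is None or now >= entry[1]:
            return None     # absent or expired: must re-query upstream
        return entry[0]     # still within the TTL: serve the cached answer

cache = DnsCache()
cache.put("example.com", "93.184.216.34", ttl=3600, now=0)
cached = cache.lookup("example.com", now=1800)   # within the TTL: old answer served
expired = cache.lookup("example.com", now=7200)  # TTL elapsed: must query again
```

Until every cache along the path has let its TTL run out, old answers keep circulating, which is exactly why nameserver changes take up to 48 hours to be visible everywhere.&lt;br /&gt;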
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely disrupt Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the web caching schemes presented in the previous section, as the hierarchical structure described can function equally well for DNS purposes.  Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion; users are able to dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, which is a very important property when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service.  Without a functional service, the bulk of Internet traffic would falter, as it would not know where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns brought up by some of the alternative services indicate that the ideal scenario lies within public hands. Regardless of its implementation, it is a service that is required to be both reliable and trusted.  A user base is dependent on some form of trusted source, whether it is a governed initiative, a corporately controlled process, or a user contributed service.   Having DNS in public hands will ensure this reliable service.  If the public is also in control of web caching, DNS can be incrementally deployed and rolled out as a piggyback to that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in mind when maintaining this service, the reliability and trust issue is satisfied.  Users must trust some entity for the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection will be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most ideal for the public.  The incremental deployment of the CoDoNS service described above is one example of an upgrade that could be rolled out in this fashion. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized, as users only need to use the sites as they see fit and the sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The DNS governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so it is required that any such organization work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining and acquiring, or mandating, the current system will impose a financial burden on the public, as would any good that is brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemas sooner than some form of public authority, given less overhead or caution when it comes to decision making.  Users may miss out on the newest services available as the authority evaluates any upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could end up owning aspects that are not permanent and will become obsolete quickly. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would end up being very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these qualities, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effects that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different than the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow for new technologies to emerge. Putting these resources into the public&#039;s hands would allow everyday people to have easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet itself evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, a total conversion to a public good is both impossible and undesirable&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9512</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9512"/>
		<updated>2011-04-12T04:00:45Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Disadvantages of DNS as a Public Good */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable for individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs on an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We propose how these aspects could be moved out of the sole control of private companies and converted into public goods. We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is currently used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While congestion avoidance benefits everyone, with the technology implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During an uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the behaviour of the ISPs to be in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be influenced unduly by private industries through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing that behaviour.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably this new infrastructure would not be as fast as the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM Transactions on Networking, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
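The super-node election described above can be sketched roughly as follows. This is a minimal illustration only; the scoring rule, the node fields and the 20% fraction are assumptions for this sketch and are not drawn from the DART or landmark-flooding papers cited here.&lt;br /&gt;

```python
# Hypothetical mesh node: availability is the fraction of time the
# node is reachable, bandwidth is in Mbit/s. Both fields and the
# scoring rule are illustrative assumptions, not a published scheme.
class Node:
    def __init__(self, name, availability, bandwidth):
        self.name = name
        self.availability = availability
        self.bandwidth = bandwidth

    def score(self):
        # Favour stable, fast nodes for routing duties.
        return self.availability * self.bandwidth

def elect_supernodes(nodes, fraction=0.2):
    """Pick the top-scoring fraction of nodes to act as super nodes."""
    ranked = sorted(nodes, key=Node.score, reverse=True)
    count = max(1, int(len(ranked) * fraction))
    return ranked[:count]
```

In a real mesh the election would be rerun as nodes join, leave and move, so that routing duties track the currently most stable members.&lt;br /&gt;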
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services that are more tolerant of lower speeds, such as email and instant messaging, from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, in which other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for some users, whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural regions where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental rollout.  It could start in a single neighborhood, using the neighbours&#039; wireless hardware to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds that the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing some of this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible at any one of them.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service: if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
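The network-switching logic described above can be sketched in a few lines. The threshold value and the network names here are assumptions for illustration only.&lt;br /&gt;

```python
# Hypothetical selector between the free public mesh and a paid ISP
# link. The 1 MB cutoff is an assumed policy, not a measured figure:
# small transfers (plain email, instant messages) stay on the mesh,
# while large payloads use the faster ISP link when one is available.
MESH_LIMIT_BYTES = 1_000_000

def pick_network(payload_bytes, isp_available):
    """Return which network a transfer of the given size should use."""
    if payload_bytes > MESH_LIMIT_BYTES and isp_available:
        return "isp"
    return "mesh"
```

An email client using such a selector would fetch message headers and bodies over the mesh but route a large attachment download over the ISP link, falling back to the mesh when no ISP subscription exists.&lt;br /&gt;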
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many parts of many websites that do not change very often (e.g. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance that the data has to travel is cut down significantly (as web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache brings to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. In this type of system, web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web content to propagate towards the demand. &lt;br /&gt;
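The bottom-up lookup and top-down copy-back just described can be sketched as follows. This is a toy model of the hierarchy, not an implementation of any surveyed system; the class and function names are ours.&lt;br /&gt;

```python
# Minimal sketch of a hierarchical cache: a chain of levels (e.g.
# local, regional, national) is queried bottom-up. A hit at any level
# is copied down to the levels below it; a miss at every level goes
# to the origin server and populates the whole chain on the way back.
class CacheLevel:
    def __init__(self, name):
        self.name = name
        self.store = {}

def fetch(url, levels, origin):
    """Resolve url through the hierarchy, calling origin only on a full miss."""
    for i, level in enumerate(levels):
        if url in level.store:
            hit = level.store[url]
            for lower in levels[:i]:
                lower.store[url] = hit  # propagate down toward the client
            return hit
    data = origin(url)                  # full round trip to the web server
    for level in levels:
        level.store[url] = data        # leave a copy at every level
    return data
```

After the first request for a popular object, subsequent requests from the same region are satisfied at the local level without touching the origin, which is the bandwidth saving the architecture is designed around.&lt;br /&gt;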
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only a single level of caches, which cooperate with one another to satisfy web requests. To do this, each cache retains metadata about the contents of all of the other caches it cooperates with and uses this metadata to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
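The single-level cooperation can be sketched as follows. The shared directory here is a deliberate simplification standing in for the per-cache metadata described above; the structure is illustrative, not taken from the cited implementations.&lt;br /&gt;

```python
# Minimal sketch of distributed web caching: peer caches at one level
# share a directory mapping each URL to the peer that holds it, and a
# miss is forwarded sideways to the owning peer instead of upward to
# a parent cache. The shared dict stands in for replicated metadata.
class PeerCache:
    def __init__(self, name, directory):
        self.name = name
        self.store = {}
        self.directory = directory      # shared url -> PeerCache map

    def put(self, url, data):
        self.store[url] = data
        self.directory[url] = self      # advertise to cooperating peers

    def get(self, url, origin):
        if url in self.store:
            return self.store[url]      # local hit
        owner = self.directory.get(url)
        if owner is not None:
            return owner.store[url]     # satisfied by a cooperating peer
        data = origin(url)              # full miss: go to the web server
        self.put(url, data)
        return data
```

Because any peer can serve a hit, load spreads across the level, and losing one peer only loses its share of the stored objects rather than a whole branch of a hierarchy.&lt;br /&gt;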
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
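The hybrid lookup could be sketched as follows (an illustrative sketch only; the actual ICP message exchange of RFC 2186 is not modeled): &lt;br /&gt;

```python
# Sketch of the hybrid scheme: on a local miss, a cache first asks its
# siblings at the same level (as ICP queries would), then falls back to
# its parent in the hierarchy. All names here are invented.

def hybrid_get(cache, siblings, parent_get, url):
    if url in cache:
        return cache[url]
    for sib in siblings:               # ICP-style sibling queries
        if url in sib:
            return sib[url]
    data = parent_get(url)             # forward the miss up the hierarchy
    cache[url] = data                  # cache the parent's reply locally
    return data

local, sibling = {}, {"news.example": "cached-copy"}
hit = hybrid_get(local, [sibling], lambda u: "from-parent:" + u, "news.example")
```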
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, and they do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is obviously important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in the exact same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to that of BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option to allow for lower level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and can actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful machine could be built, especially on a large scale, relatively inexpensively. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them while also making use of the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. This would mean that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily translate into dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources required of modern day corporations. &lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that it would allow individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole to still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through the use of popular social networking websites such as Facebook, as well as have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests being sent out from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by caches implemented by another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in these types of web requests being satisfied locally, significantly reducing the number of long distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches rather than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication would allow for fault tolerance, and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached means that any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how web caching is done, can be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both its software and its infrastructure, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or to set up new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which, in itself, is a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers; it can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a more simplistic, higher level view of the system. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Provision of the service currently falls to the individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
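The static-tree view of DNS used in this discussion can be illustrated with a small sketch (the zone data and the address below are invented for the example): &lt;br /&gt;

```python
# Sketch of DNS as a static distributed tree: resolution walks from the
# root down one label at a time until it reaches an address record.
# The zone data and the address are fictional, for illustration only.

ROOT = {
    "ca": {
        "carleton": {
            "scs": {"homeostasis": "192.0.2.1"},  # documentation-range address
        },
    },
}

def resolve(name, tree=ROOT):
    """Walk the labels of a name right to left, descending the tree."""
    node = tree
    for label in reversed(name.split(".")):
        node = node[label]
    return node

addr = resolve("homeostasis.scs.carleton.ca")
```

In the real system each subtree lives on different nameservers, which is what makes the tree distributed rather than a single in-memory structure. &lt;br /&gt;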
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service. Users understand that all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternatives such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise when considering user privacy, though. In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given a clean track record when it comes to providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would now have deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project. As strong as user generated communities can be at providing and generating ample content, it is difficult to imagine this large a responsibility lying on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations. These issues centre around bottlenecks, update propagation, attack resiliency and general performance. Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users. For example, Bell Canada customers are served by two DNS servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain&#039;s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world refresh their records on static schedules and, because of caching, this period is required for the changes to spread.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks. Malicious users can target the limited number of servers to severely disrupt Internet traffic at any time. Measures are in place to prevent this kind of attack; however, as with anything security based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all of the factors mentioned above leaves much room for improvement in both performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS performance, reducing latency much as web caching does for regular content browsing. DNS caches could be contained within the web caching schemes presented in the previous section; the hierarchical structure described can function equally well for DNS purposes. Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users are able to dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
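A piggybacked DNS cache would, like any DNS cache, honour record lifetimes. A minimal sketch of such a TTL-bounded cache follows (all names and the address are illustrative): &lt;br /&gt;

```python
import time

# Sketch of a TTL-based DNS cache entry, the kind that could piggyback
# on each level of a web-cache hierarchy. Entries live until their
# time-to-live expires, which is why nameserver changes take time to
# propagate. The class and its interface are invented for illustration.

class DnsCache:
    def __init__(self):
        self.entries = {}   # name mapped to (address, expiry time)

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self.entries[name] = (address, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        record = self.entries.get(name)
        if record is None:
            return None
        address, expires = record
        if now > expires:          # stale: must re-query an upstream server
            del self.entries[name]
            return None
        return address

cache = DnsCache()
cache.put("example.org", "192.0.2.1", ttl=300, now=0)   # fictional address
fresh = cache.get("example.org", now=100)     # still within the TTL
stale = cache.get("example.org", now=1000)    # past the TTL: evicted
```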
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching. One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all of the factors indicated above. It is also incrementally deployable, a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network. This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good. The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service. Without a functional service, the bulk of Internet traffic would falter, as it would not know where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, it is a service that must be both reliable and trusted. The user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user contributed service. Having DNS in public hands would ensure this reliable service. If the public is also in control of web caching, DNS caching can be incrementally deployed and rolled out as a piggyback to that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest in maintaining this service, the reliability and trust issue is satisfied. Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind. Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it would be done when deemed most ideal for the public. The incremental deployability of a system such as CoDoNS, described above, would make such an upgrade feasible. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
As with the other potential public goods, transitioning DNS into public hands also has its disadvantages. They are briefly described below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such arrangement must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining and acquiring, or mandating, the current system would impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority, given less overhead or caution when it comes to decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something essential that everyone should have access to, ensuring a basic level of service for users of a given Internet public good is crucial. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effects that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different than the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet meeting the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow for new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity for application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to successfully identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, a total conversion to a public good is impossible and undesirable&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9511</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9511"/>
		<updated>2011-04-12T03:59:52Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Advantages of DNS as a Public Good */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within the society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We chose these three pieces because they are absolutely essential to the current operation of the Internet, and we propose how each could be removed from being solely in the hands of private companies and converted into a public good. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a basic set of criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise with ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is currently used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While avoiding congestion is good for everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We also don&#039;t know if this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  In Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents.  There are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the behaviours of the ISPs to be in accordance with the best interest of the public.  There are problems in that politicians have their own goals and can be influenced unduly by private industries through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it creates its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match that of the incumbents, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other similar services more tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs.  This in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh was partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  This could also help make Internet access available for fiscally disadvantaged members of the population as well.  Finally, a mesh topology has the potential to extend Internet coverage to low density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to our low population density, has areas that draw a parallel to these rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the wireless routers of the neighbours to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the potential speeds sustainable by the mesh, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of some infrastructure would necessitate an increase in taxes. Since the support would be at all levels of government, the taxes would be distributed at all levels becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two level system of Internet access that the mesh overlay provides, some software would need to be changed.  An example of this would be email which is normally considered a low bandwidth service. If a large attachment were present it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and the capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of many websites that do not change very often (i.e. logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance that the data has to travel is cut down significantly (as web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
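The store-on-miss, serve-on-hit behaviour described above can be sketched as follows. This is a minimal illustration only, not any cited system&#039;s implementation; the names (WebCache, fake_origin) and the dict-based store are invented for the example, and real caches add expiry and validation rules.

```python
# Minimal sketch of the caching behaviour described above: repeated requests
# are served from local storage instead of contacting the origin server.
# fetch_from_origin is a stand-in for a real HTTP fetch.

class WebCache:
    def __init__(self, fetch_from_origin):
        self.store = {}                  # url -> cached response body
        self.fetch_from_origin = fetch_from_origin
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:            # cache hit: no origin traffic
            self.hits += 1
            return self.store[url]
        self.misses += 1                 # cache miss: fetch and remember
        body = self.fetch_from_origin(url)
        self.store[url] = body
        return body

# Two requests for the same object cause only one origin fetch.
origin_fetches = []
def fake_origin(url):
    origin_fetches.append(url)
    return "content of " + url

cache = WebCache(fake_origin)
cache.get("http://example.com/logo.png")
cache.get("http://example.com/logo.png")
print(len(origin_fetches))       # 1
print(cache.hits, cache.misses)  # 1 1
```

The single origin fetch for two requests is exactly the bandwidth saving the bullet list above describes.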
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
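The escalate-on-miss, copy-on-the-way-down behaviour of a hierarchical cache can be sketched as follows. This is a hedged illustration of the scheme described above, not the survey&#039;s implementation; CacheLevel and the lambda origin fetch are invented names for the example.

```python
# Sketch of hierarchical lookup: a miss is passed to the next level up, and
# the response leaves a copy at every level on the way back down.

class CacheLevel:
    def __init__(self, name, parent=None, fetch=None):
        self.name = name
        self.parent = parent      # next level up (None at the top)
        self.fetch = fetch        # origin fetch, used only at the top level
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name        # satisfied at this level
        if self.parent is not None:
            body, served_by = self.parent.get(url)   # escalate the miss
        else:
            body, served_by = self.fetch(url), "origin"
        self.store[url] = body                       # copy left on the way down
        return body, served_by

national = CacheLevel("national", fetch=lambda u: "data:" + u)
regional = CacheLevel("regional", parent=national)
local    = CacheLevel("local",    parent=regional)

_, first  = local.get("http://example.com/")   # travels to the origin
_, second = local.get("http://example.com/")   # now answered locally
print(first, second)  # origin local
```

The second request never leaves the local level, which is how popular sites "propagate towards the demand".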
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only a single level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and also introduces fault tolerance that was not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
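The peer-metadata idea in the distributed scheme above can be sketched as follows. This is a simplified assumption-laden illustration, not how the cited systems are built: a plain shared dict stands in for the meta-data each cache would really exchange, and DistributedCache is an invented name.

```python
# Sketch of single-level distributed caching: each cache consults shared
# metadata about which peer holds an object and forwards misses to that peer
# rather than to a higher level.

class DistributedCache:
    def __init__(self, name, directory):
        self.name = name
        self.store = {}
        self.directory = directory   # shared metadata: url -> cache holding it

    def put(self, url, body):
        self.store[url] = body
        self.directory[url] = self   # advertise the object to peers

    def get(self, url, fetch):
        if url in self.store:
            return self.store[url], self.name
        peer = self.directory.get(url)
        if peer is not None:                  # peer hit: ask the sibling cache
            return peer.store[url], peer.name
        body = fetch(url)                     # global miss: go to the origin
        self.put(url, body)
        return body, self.name

directory = {}
a = DistributedCache("A", directory)
b = DistributedCache("B", directory)

fetch = lambda u: "data:" + u
a.get("http://example.com/x", fetch)          # miss: A fetches and advertises
_, served_by = b.get("http://example.com/x", fetch)
print(served_by)  # A
```

Cache B never contacts the origin for the object A already holds, which is the load-balancing and fault-tolerance benefit noted above.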
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are currently implemented by ISPs, which deploy them because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches and finally a national level. These would all be standardized so that regional or provincial caches could serve web requests for users in different regions or provinces. Having formalized, standardized web cache hierarchies would reduce wasted bandwidth and improve the end-user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. Users could share their caches with each other, allowing the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
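&lt;br /&gt;
The passive variant described above, in which user machines act purely as storage while the local proxy decides what lives where, could look roughly like the following (a hypothetical Python sketch; all names are our own):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of a "passive" neighbourhood cache: the proxy keeps an index of
# which user machine holds which URL and redirects requests accordingly;
# the machines themselves make no caching decisions.

class NeighbourhoodProxy:
    def __init__(self):
        self.index = {}                        # URL -> machine storing it

    def place(self, machine, url, body):
        machine["store"][url] = body           # proxy decides where data lives
        self.index[url] = machine

    def fetch(self, url):
        machine = self.index.get(url)
        if machine is not None and url in machine["store"]:
            return machine["store"][url]       # served from a neighbour
        return None                            # fall back to the ISP-level cache
```
&lt;br /&gt;
In the active variant, this lookup logic would move from the proxy onto the user machines themselves, with the proxy retained mainly to mediate the privacy concerns mentioned above.&lt;br /&gt;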
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since most users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built quite inexpensively, especially at large scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also making use of the locally cached data. In this type of system, web developers would build their applications to use the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily bring the dramatic increases in hardware and support costs that it does today. Essentially, this would allow anyone to write the next Facebook or Google without needing the enormous financial or physical resources available to modern corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet which, for one reason or another, become disconnected from the Internet as a whole could still communicate through the web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests being sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached web request from a user of one ISP that could be satisfied by a cache of another local ISP must instead be serviced all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that a user&#039;s web request can be satisfied nearby are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: web requests satisfied within a user&#039;s immediate neighbourhood would be extremely fast. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that is not present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than what can be efficiently used as a cache, data can be duplicated. This duplication allows for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. These caches would also drastically improve reliability when combined with full web application caching: in the event that a single region is disconnected from the Internet, users would still be able to use the cached popular web applications and data until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, since full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; in any reachable cache. If a region is disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity put into public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies that improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme is incrementally deployable, both in software and in infrastructure. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place would certainly involve significant infrastructure costs, even if it incorporated the ISP caches already in place. Although a considerable amount of infrastructure may be available in large urban centers, rural regions as well as the higher levels of the hierarchy (provincial, national, etc.) would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g., the new modem proposed above) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a simpler, higher-level view of the system is taken. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database or tree of names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service. In this arrangement, all DNS lookups can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that their DNS requests are processed by any number of alternatives, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues arise here as well. In the case of Google, there is reason to consider how Google will treat and use the information it gains access to, even given a clean track record in providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would gain deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing ample content, it is difficult to imagine this large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations. These issues centre on bottlenecks, update propagation, attack resiliency and general performance. Any replacement system should improve upon them.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottlenecks because a small number of servers must serve many users. For example, Bell Canada customers are served by two servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain&#039;s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world cache records and only refresh them on a fixed schedule (when a record&#039;s time-to-live expires), so this period is required for changes to get across.&lt;br /&gt;
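&lt;br /&gt;
The delay is a direct consequence of time-to-live (TTL) based caching: a resolver keeps serving its cached record until the TTL expires, so a nameserver change only becomes visible once every cached copy has aged out. A minimal sketch of this behaviour (hypothetical class, Python):&lt;br /&gt;
&lt;br /&gt;
```python
import time

# Minimal sketch of TTL-based DNS caching: a cached record is reused
# until its time-to-live expires, which is why nameserver changes can
# take up to the longest outstanding TTL to propagate.

class TTLCache:
    def __init__(self, clock=time.time):
        self.clock = clock
        self.records = {}                     # name -> (ip, expiry timestamp)

    def put(self, name, ip, ttl):
        self.records[name] = (ip, self.clock() + ttl)

    def get(self, name):
        entry = self.records.get(name)
        if entry is None:
            return None                       # miss: must query upstream
        ip, expiry = entry
        if self.clock() >= expiry:
            del self.records[name]            # expired: must query upstream
            return None
        return ip                             # fresh: keep serving the old IP
```
&lt;br /&gt;
Until the expiry time passes, the cache keeps returning the old address even if the authoritative record has already changed, which is exactly the propagation window described above.&lt;br /&gt;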
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks. Malicious users can target the limited servers to severely disrupt Internet traffic at any time. Measures are in place to prevent this kind of attack; however, like anything security-related, this requires constant monitoring and changes in approach as attackers evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing. DNS caches could be contained within the web caching schemes presented in the previous section: the hierarchical structure described there can function equally well for DNS purposes. Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
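&lt;br /&gt;
The piggybacking idea can be sketched as a lookup that climbs the same levels as the web cache hierarchy (neighbourhood, regional, provincial, national) and only falls through to an authoritative server at the top. A hypothetical Python illustration:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of a DNS cache riding on each level of the web cache hierarchy.
# `levels` is ordered from the cache closest to the user to the highest one;
# a hit at any level also populates every level below it.

def resolve(levels, name, authoritative):
    for i, cache in enumerate(levels):
        if name in cache:
            ip = cache[name]
            for lower in levels[:i]:      # fill lower caches on the way down
                lower[name] = ip
            return ip
    ip = authoritative[name]              # miss at every level
    for cache in levels:
        cache[name] = ip
    return ip
```
&lt;br /&gt;
Frequently visited names quickly end up in the lowest caches, which is the &amp;quot;democratic&amp;quot; effect noted above: users decide what resolves fastest simply by visiting it.&lt;br /&gt;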
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching. One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts improvements on all of the factors indicated above. It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network, removing the bottlenecks and increasing resiliency against attack because the single points of failure have been eliminated.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate for a public good. Directing traffic on the Internet, whether user-driven or application-driven, requires a naming service; without a functional service, the bulk of Internet traffic would falter because it would not know where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by ISPs and the privacy concerns raised by some of the alternative services suggest that the ideal scenario lies in public hands. Regardless of its implementation, the service must be both reliable and trusted. Users depend on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service. Having DNS in public hands would ensure this reliable service. If the public also controls web caching, DNS can be incrementally deployed and rolled out by piggybacking on that scheme.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest driving the maintenance of this service, the reliability and trust issues are addressed. Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind. Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next generation service or upgrade, it will be done when deemed most beneficial for the public. The incremental deployment of the CoDoNS service described above is one example of an upgrade that a public authority could roll out in this fashion.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic level of service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such arrangement should work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring, maintaining, or mandating the current system will impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, cycling through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands; only after they have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are by definition something that everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot be guaranteed to be accessible to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience, yet the parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user.&lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect of their benefits. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely be amplified by the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet that fulfill the above criteria are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet would provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity for application developers&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. By using this set of criteria, one would be able to identify future public goods candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves to meet its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9510</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9510"/>
		<updated>2011-04-12T03:59:37Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Advantages of DNS as a Public Good */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable for individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be easy to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs on an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion.  It does this by assigning priorities to packets using various criteria decided by the ISPs.  While the technique can benefit everyone, when it is implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know whether the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Wikipedia/Net Neutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found.  Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problems are that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres, acting at the municipal level, could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres in which the mesh is located could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, the privately owned ISPs might even disappear entirely.&lt;br /&gt;
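To make the routing discussion concrete, the following toy sketch (an illustrative assumption, not the DART or landmark-flooding protocols cited above) computes a next-hop table for one mesh node using a breadth-first search over a hypothetical four-node neighbourhood:

```python
from collections import deque

def build_routing_table(adjacency, source):
    # Breadth-first search from `source`; the first hop on each shortest
    # path becomes the routing-table entry for that destination.
    next_hop = {}
    visited = {source} | adjacency[source]
    queue = deque((n, n) for n in adjacency[source])
    while queue:
        node, first = queue.popleft()
        next_hop[node] = first
        for neigh in adjacency[node]:
            if neigh not in visited:
                visited.add(neigh)
                queue.append((neigh, first))
    return next_hop

# Hypothetical four-node neighbourhood mesh: A-B-C in a line, plus A-D-C.
mesh = {
    "A": {"B", "D"},
    "B": {"A", "C"},
    "C": {"B", "D"},
    "D": {"A", "C"},
}
table = build_routing_table(mesh, "A")
print(sorted(table))  # ['B', 'C', 'D']
```

A real mesh would have to recompute these tables continually as mobile nodes appear and disappear, which is exactly the scalability problem the cited protocols address.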
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other similar services more tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs.  This in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, in which other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for some users, whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas comparable to the rural areas where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless hardware to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds sustainable by the mesh, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across those levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to be changed.  An example of this is email, which is normally considered a low-bandwidth service: if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and the capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of many websites that do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance that the data has to travel is cut down significantly (as web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that a web cache adds to the Internet, allowing users to access documents even when the supplying web server is down, as well as the ability for organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
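The store-on-first-retrieval mechanism described above can be sketched in a few lines. This is a minimal illustration only, not any ISP&#039;s actual cache; the URL, TTL and `fetch` function are hypothetical:

```python
import time

class WebCache:
    """Minimal expiring web cache (illustrative sketch only)."""
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}                     # url -> (body, time stored)

    def get(self, url, fetch):
        entry = self.store.get(url)
        if entry is not None:
            body, stored_at = entry
            if self.ttl > time.time() - stored_at:
                return body, True           # hit: no origin traffic
        body = fetch(url)                   # miss: retrieve from origin
        self.store[url] = (body, time.time())
        return body, False

origin_calls = []
def fetch(url):
    origin_calls.append(url)                # count trips to the origin
    return "logo, static text, pictures"

cache = WebCache()
cache.get("http://example.org/", fetch)     # first request: origin fetch
body, hit = cache.get("http://example.org/", fetch)  # second: served locally
print(hit, len(origin_calls))               # True 1
```

The second request never reaches the origin, which is the source of the bandwidth, latency and server-load savings summarized above.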
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
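The lookup path just described can be sketched as follows; the level names and the `fetch` stand-in are illustrative assumptions, not part of the cited survey:

```python
class HierarchicalCache:
    """Sketch of hierarchical caching: a miss is passed up to the parent
    level, and a copy is left at every level on the way back down."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.store = name, parent, {}

    def get(self, url, fetch_origin):
        if url in self.store:
            return self.store[url], self.name   # satisfied at this level
        if self.parent is not None:
            body, level = self.parent.get(url, fetch_origin)
        else:
            body, level = fetch_origin(url), "origin"
        self.store[url] = body                   # leave a copy behind
        return body, level

national = HierarchicalCache("national")
regional = HierarchicalCache("regional", parent=national)
local = HierarchicalCache("local", parent=regional)

fetch = lambda url: "page body"
_, level1 = local.get("http://example.org/", fetch)  # travels to the origin
_, level2 = local.get("http://example.org/", fetch)  # now held locally
print(level1, level2)  # origin local
```

The copy left at each level is how popular sites "propagate towards the demand" in this architecture.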
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of cache that cooperates with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing as well as introduces fault tolerance that was not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
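A minimal sketch of this one-level cooperation, assuming each cache simply shares the set of URLs it holds as its metadata (real systems use compact digests and the protocols from the cited papers):

```python
class DistributedCache:
    """One-level cooperative caching: each cache consults peer metadata
    and forwards misses to a peer that advertises the object.
    Illustrative sketch only."""
    def __init__(self, name):
        self.name, self.store, self.peers = name, {}, []

    def digest(self):
        return set(self.store)           # metadata shared with peers

    def get(self, url, fetch_origin):
        if url in self.store:
            return self.store[url], self.name
        for peer in self.peers:          # check peer metadata first
            if url in peer.digest():
                body = peer.store[url]
                self.store[url] = body
                return body, peer.name
        body = fetch_origin(url)         # no peer has it: go to the origin
        self.store[url] = body
        return body, "origin"

a, b = DistributedCache("a"), DistributedCache("b")
a.peers, b.peers = [b], [a]
b.store["http://example.org/"] = "body"
_, served_by = a.get("http://example.org/", lambda u: "body")
print(served_by)  # b
```

Because any peer can satisfy a request, load spreads across the caches and the loss of one node does not take down the whole scheme, which is the fault-tolerance advantage noted above.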
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches and finally a national level. These of course would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web cache hierarchies would reduce wasted bandwidth and improve the end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, making the caches more fault tolerant and therefore more robust.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try and retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
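&lt;br /&gt;
As a rough illustration of the passive-storage variant described above, the local proxy could decide object placement by hashing each URL onto a participating machine. This is a hypothetical sketch; the names and the simple modulo placement scheme are assumptions for illustration, not a detail of the proposal.&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch: a neighbourhood proxy treats peers' machines as
# passive storage, choosing the holder of each object by hashing its URL.
import hashlib

class NeighbourhoodProxy:
    def __init__(self, peers):
        self.peers = peers  # peer id -> dict standing in for that peer's disk

    def peer_for(self, url):
        ids = sorted(self.peers)  # stable ordering so placement is deterministic
        digest = int(hashlib.sha1(url.encode()).hexdigest(), 16)
        return ids[digest % len(ids)]  # simple modulo placement

    def put(self, url, data):
        self.peers[self.peer_for(url)][url] = data

    def get(self, url):
        return self.peers[self.peer_for(url)].get(url)
```
&lt;br /&gt;
Because placement is deterministic, any requester asking the proxy for a URL is pointed at the same peer that stored it; a real system would add replication and handle peers leaving.&lt;br /&gt;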
&lt;br /&gt;
Another option to allow for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users would not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful machine could be built, especially on a large scale, relatively inexpensively. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with the demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, it would be possible to extend the classic definition of web caching. This new infrastructure could allow for the caching of web application code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. This would mean that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. This would mean that a region undergoing a major natural catastrophe such as an earthquake, or even one willfully disconnected from the rest of the Internet, would still be able to communicate internally through popular social networking websites such as Facebook, as well as have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth, in the form of unneeded web requests sent out from the caches to the originating web servers, will go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that might be satisfiable by the caches of another local ISP must instead be serviced all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. These types of web requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication is possible. This duplication would allow for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the Internet, users would still be able to use the cached versions of popular web applications and data until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, since full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how web caching is done, can be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. It is imagined that a scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits of making web caching a public good, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work, whether converting the old ISP caches or setting up new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which, in itself, is a cost) or have to purchase specialized hardware (e.g., a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system. DNS (Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a more simplistic, higher-level view of the system. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
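&lt;br /&gt;
The simplified view taken here, a static tree queried with a domain name, can be modelled directly. This toy model is purely illustrative and ignores caching, delegation and every other real-world detail; the names are hypothetical.&lt;br /&gt;
&lt;br /&gt;
```python
# Toy model of DNS as a static tree mapping a name to an address.
# Each node holds children keyed by label; resolution walks the labels
# right to left, starting from the root.
class DnsNode:
    def __init__(self, address=None):
        self.address = address  # IP address, if this node names a host
        self.children = {}      # label -> DnsNode

def resolve(root, name):
    node = root
    # e.g. "www.example.com" is walked as com -> example -> www
    for label in reversed(name.split(".")):
        if label not in node.children:
            return None  # the real system would answer NXDOMAIN
        node = node.children[label]
    return node.address
```
&lt;br /&gt;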
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service. It is understood by the users that all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when and if a user seeks a non-existent URL. This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise when considering user privacy, though. In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given a clean track record when it comes to providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would now have deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large a responsibility lying on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations. These issues centre on bottlenecks, update propagation, attack resiliency and general performance. Any replacement system under consideration would need to improve upon them.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users. For example, Bell Canada customers are served by two servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world update their records on a static schedule and, once caching is taken into account, this period is required to get changes across.&lt;br /&gt;
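&lt;br /&gt;
The behaviour can be sketched with a toy caching resolver: a record, once cached, keeps being served until its time to live (TTL) runs out, which is why a nameserver change remains invisible until every cached copy expires. All names here are illustrative, and time is injected as a parameter for clarity.&lt;br /&gt;
&lt;br /&gt;
```python
# Toy caching resolver illustrating why DNS updates propagate slowly:
# a cached answer is served, even if stale, until its TTL expires.
class CachingResolver:
    def __init__(self):
        self.cache = {}  # name -> (address, expiry time)

    def lookup(self, name, authoritative, now, ttl=3600):
        entry = self.cache.get(name)
        if entry and now < entry[1]:   # still fresh: serve the cached answer
            return entry[0]
        address = authoritative(name)  # refetch once the TTL has passed
        self.cache[name] = (address, now + ttl)
        return address
```
&lt;br /&gt;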
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, indicated by the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack, however, like anything security based, it requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS performance. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing. DNS caches could be contained within the web caching schemes presented in the previous section, as the hierarchical structure described can function equally well for DNS purposes. Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion; users are able to dictate which sites load quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even if it is aided by caching.  One candidate as a next generation naming system, actively being researched at Cornell University, is entitled Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt;. Through a structure based on caching and peer to peer distribution, the system boasts an improvement on all the factors indicated above.  It also adds the benefit that it is incrementally deployable which is a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of the discussion of this report, technical specifications will be avoided in favour of looking at the role of DNS as a whole for its use as a public good. In essence, the static DNS tree is decentralized and distributed across the network with this implementation.  This removes the issues of bottlenecks and increases resiliency against attack as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good. The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service. Without a functional service, the bulk of Internet traffic would falter, as it would not know where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, it is a service that must be both reliable and trusted. A user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service. Having DNS in public hands will ensure this reliable service. If the public also controls web caching, DNS can be incrementally deployed and rolled out by piggybacking on that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is satisfied. Users must trust some entity for the service, so it is essential that this entity have the public’s best intentions in mind. Misinformation and misdirection will be averted by assuming trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next generation service or upgrade, it will be done when deemed most ideal for the public. A service such as CoDoNS, for example, could be rolled out incrementally as soon as the public authority judged it beneficial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to further capitalize on the benefits of web caching, namely reduced latency, reduced wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized, as users only need to use the sites as they see fit and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet in the event of a network partition.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority will have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such arrangement must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining and acquiring or mandating the current system will impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemas sooner than some form of public authority, given less overhead or caution when it comes to decision making.  Users may miss out on the newest services available as the authority evaluates any upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If guaranteed access to an aspect of the Internet cannot be given to all of the users in its reach, then it should not, by definition, be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet meeting the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet would provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity for application developers&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, in essence transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. By using this set of criteria, one would be able to identify future public goods candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and perhaps only true, way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, a total conversion to a public good is both impossible and undesirable&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9509</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9509"/>
		<updated>2011-04-12T03:58:26Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS as a Public Good */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs using an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be removed from being solely in the hands of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of the consumer&#039;s own network to be the infrastructure of the Internet and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While congestion control can benefit everyone, when the technology is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether shaping is only done at peak times.  We also don&#039;t know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Wikipedia/Net Neutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law is passed preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability, consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM Transactions on Networking, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop ’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
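The super-node election sketched above could work roughly as follows. This is a minimal illustration only; the node attributes, the ranking criteria and the &#039;&#039;elect_super_nodes&#039;&#039; function are our own assumptions, not taken from DART or any deployed mesh protocol.&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A mesh participant; every field here is illustrative, not from a real protocol."""
    node_id: str
    uptime: float      # fraction of time the node is reachable (0.0 - 1.0)
    bandwidth: float   # link speed in Mbit/s
    mobile: bool       # laptops and phones make poor routing anchors

def elect_super_nodes(nodes, count=2):
    """Pick the most stable (then fastest) static nodes to route for the mesh."""
    candidates = [n for n in nodes if not n.mobile]
    # Rank by availability first, then by raw bandwidth.
    ranked = sorted(candidates, key=lambda n: (n.uptime, n.bandwidth), reverse=True)
    return ranked[:count]

mesh = [
    Node("desktop-a", uptime=0.95, bandwidth=50.0, mobile=False),
    Node("laptop-b",  uptime=0.40, bandwidth=100.0, mobile=True),
    Node("desktop-c", uptime=0.99, bandwidth=25.0, mobile=False),
    Node("router-d",  uptime=0.90, bandwidth=80.0, mobile=False),
]
supers = elect_super_nodes(mesh, count=2)
print([n.node_id for n in supers])  # ['desktop-c', 'desktop-a']
```

A real mesh would of course re-run such an election continuously as nodes join, leave and move, which is exactly where the routing-accuracy challenge cited above comes in.&lt;br /&gt;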
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services, such as email, instant messaging, and other services more tolerant of lower speeds, from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  This could also help make Internet access available for fiscally disadvantaged members of the population as well.  Finally, a mesh topology has the potential to extend Internet coverage to low density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to our low population density, has areas that draw a parallel to these rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the potential speeds that are sustainable by the mesh, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing some infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low bandwidth service; if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and the capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of many websites that do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in latency apparent to the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, the requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that even small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections passed through to the server by providing the data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
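The store-and-reuse cycle described above can be sketched in a few lines. This is a toy model only, not a real proxy: the &#039;&#039;WebCache&#039;&#039; class, its fixed time-to-live policy and the &#039;&#039;origin&#039;&#039; callback are all illustrative assumptions.&lt;br /&gt;

```python
import time

class WebCache:
    """Toy proxy cache: serve stored copies of objects until they expire."""
    def __init__(self, ttl_seconds=300, fetch=None):
        self.ttl = ttl_seconds
        self.store = {}       # url -> (body, stored_at)
        self.fetch = fetch    # called on a miss to reach the origin server
        self.hits = 0
        self.misses = 0

    def get(self, url):
        entry = self.store.get(url)
        if entry is not None:
            body, stored_at = entry
            if time.time() - stored_at < self.ttl:   # still fresh?
                self.hits += 1
                return body
        # Miss (or stale copy): go to the origin, then keep a copy for later.
        self.misses += 1
        body = self.fetch(url)
        self.store[url] = (body, time.time())
        return body

origin_requests = []
def origin(url):
    origin_requests.append(url)          # record every request that leaves the proxy
    return "content of " + url

cache = WebCache(ttl_seconds=300, fetch=origin)
cache.get("http://example.org/logo.png")   # first request goes to the origin
cache.get("http://example.org/logo.png")   # second is served from the cache
print(len(origin_requests))  # 1 -- the repeated request never left the proxy
```

The bandwidth, latency and server-load advantages listed above all fall out of that one behaviour: the second request is answered without consuming any upstream resources.&lt;br /&gt;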
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
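The lookup-and-propagate behaviour of a hierarchical cache can be sketched as follows; the &#039;&#039;CacheLevel&#039;&#039; class and its level names are hypothetical, a minimal illustration of the architecture rather than any production design.&lt;br /&gt;

```python
class CacheLevel:
    """One tier of a hierarchical cache (local -> regional -> national)."""
    def __init__(self, name, parent=None, fetch=None):
        self.name = name
        self.parent = parent   # next level up, or None at the top tier
        self.fetch = fetch     # only the top tier contacts origin servers
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url]
        # Miss: escalate to the parent; the top tier fetches from the origin.
        body = self.parent.get(url) if self.parent else self.fetch(url)
        self.store[url] = body   # leave a copy at this tier on the way back down
        return body

national = CacheLevel("national", fetch=lambda url: "data for " + url)
regional = CacheLevel("regional", parent=national)
local    = CacheLevel("local", parent=regional)

local.get("http://example.org/")   # travels up to the origin once...
print("http://example.org/" in regional.store)  # True -- copies propagate down
```

Note how popularity drives placement: an object only occupies space at the levels through which a request for it has actually flowed.&lt;br /&gt;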
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains metadata about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
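The metadata-driven cooperation in a distributed cache might look like the sketch below. It is deliberately simplified: real systems exchange compact digests asynchronously rather than querying peers on every miss, and the &#039;&#039;DistributedCache&#039;&#039; class and its &#039;&#039;directory&#039;&#039; method are our own illustrative assumptions.&lt;br /&gt;

```python
class DistributedCache:
    """One peer in a flat, cooperating cache layer: each peer consults its
    neighbours' directories before falling back to the origin server."""
    def __init__(self, name, fetch):
        self.name = name
        self.store = {}
        self.peers = []      # cooperating caches at the same level
        self.fetch = fetch   # origin fetch, used only when no peer has the data

    def directory(self):
        # Metadata shared with peers: here, just the set of cached URLs.
        return set(self.store)

    def get(self, url):
        if url in self.store:
            return self.store[url]
        for peer in self.peers:
            if url in peer.directory():    # consult peer metadata first
                body = peer.store[url]
                self.store[url] = body
                return body
        body = self.fetch(url)             # no peer has it: go to the origin
        self.store[url] = body
        return body

origin_hits = []
def origin(url):
    origin_hits.append(url)
    return "page at " + url

a = DistributedCache("cache-a", origin)
b = DistributedCache("cache-b", origin)
a.peers, b.peers = [b], [a]

b.get("http://example.org/news")   # b fetches from the origin
a.get("http://example.org/news")   # a finds it via b's directory instead
print(len(origin_hits))  # 1
```

The fault tolerance mentioned above follows from the flat structure: if one peer disappears, the others simply stop consulting its directory and the layer keeps working.&lt;br /&gt;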
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. Typically these web caches are implemented by ISPs, and they do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP controlled web caches into a public good would allow for a balance between the financial and end user experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in the exact same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography: the ISP level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state level web caches and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web caching hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. Users could then share their caches with each other, allowing the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and can point users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding themselves which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
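&lt;br /&gt;
The passive variant, where the local proxy decides what data exists where, could use a deterministic placement rule so every party agrees on which machine holds which object. The sketch below uses simple hashing; this is one possible policy assumed for illustration, not something prescribed by the cited work:&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib

def owner(url, machines):
    """Pick the participating machine that stores a given object.
    Deterministic hashing means proxy and users agree on placement
    without extra coordination (illustrative policy only)."""
    digest = hashlib.sha256(url.encode()).hexdigest()
    return machines[int(digest, 16) % len(machines)]

machines = ["house-a", "house-b", "house-c"]   # hypothetical participants
m1 = owner("example.org/video.mp4", machines)
m2 = owner("example.org/video.mp4", machines)  # same object, same owner
```
The proxy can then point any neighbour asking for that object at the same machine. A real deployment would need replication and membership handling on top of this.&lt;br /&gt;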
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since most users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built quite inexpensively, especially at large scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web application code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available public resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could then realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources available only to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in all of the reachable caches. This added robustness would certainly reduce the panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the proposed standardized, hierarchical/distributed hybrid web caches, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that might be satisfiable by caches of another local ISP must be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level is likely to be larger than the amount that can be efficiently used as a cache, data could be duplicated. This duplication would provide fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. Reliability would improve further still with full web application caching: in the event that a region is disconnected from the Internet, users could continue to use the cached popular web applications and data until reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is re-established, giving the Internet&#039;s most popular web sites and applications unprecedented reliability.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, because full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region is disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity put into public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies to improve how web caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthwhile investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional- and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even incorporating the ISP caches already deployed, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions as well as the higher levels of the hierarchy (provincial, national, etc.) would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work, whether converting the old ISP caches or setting up new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions the given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system. DNS (Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet: to make the system work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Provision of the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
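&lt;br /&gt;
The static, distributed tree model can be made concrete with a toy resolver: each zone knows its children, and a query walks from the root down the labels of the name. The address below is made up for illustration; real DNS adds delegation, caching and TTLs:&lt;br /&gt;
&lt;br /&gt;
```python
# Toy DNS tree: nested dictionaries stand in for delegated zones.
tree = {
    "ca": {
        "carleton": {
            "scs": "134.117.0.1",   # hypothetical address
        }
    }
}

def resolve(name, root):
    node = root
    for label in reversed(name.split(".")):   # walk: ca, then carleton, then scs
        node = node[label]
    return node

ip = resolve("scs.carleton.ca", tree)
```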
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service. It is understood by users that all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed via any number of alternative options such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoid the ISP issues, but still imparts significant trust on another corporation or &amp;quot;good Samaritans&amp;quot; in a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy. In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given a clean track record when it comes to providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would gain deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing ample content, it is difficult to imagine this large responsibility lying on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations. These centre on bottlenecks, update propagation, attack resiliency and general performance. Any replacement system should improve upon these issues.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a low number of servers is accessed by many users. For example, Bell Canada customers are served by two servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also affects attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world update their records on a static schedule and, with caching taken into account, this period is required for changes to spread.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, indicated by the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack, however, like anything security based, it requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing. DNS caches could be contained within the web caching schemes presented in the previous section: the hierarchical structure described can function equally well for DNS purposes. Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
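&lt;br /&gt;
At its core, the DNS cache described here is a map from name to address with an expiry time, and such a cache could sit at every tier of the web-cache hierarchy. A minimal sketch (the class name and addresses are illustrative assumptions):&lt;br /&gt;
&lt;br /&gt;
```python
import time

class DnsCache:
    """Name-to-address map with TTL-based expiry (illustrative sketch)."""

    def __init__(self):
        self.entries = {}   # name mapped to (address, expiry timestamp)

    def put(self, name, address, ttl_seconds):
        self.entries[name] = (address, time.time() + ttl_seconds)

    def get(self, name):
        record = self.entries.get(name)
        if record is None:
            return None                  # miss: ask the next tier up
        address, expiry = record
        if time.time() >= expiry:        # stale: drop and re-fetch upstream
            del self.entries[name]
            return None
        return address

cache = DnsCache()
cache.put("example.org", "93.184.216.34", ttl_seconds=3600)
hit = cache.get("example.org")                            # fresh entry
cache.put("stale.example", "10.0.0.1", ttl_seconds=-1)    # already expired
miss = cache.get("stale.example")                         # forces re-fetch
```
The TTL is also what bounds the propagation delay mentioned above: a shorter TTL spreads changes faster at the cost of more upstream lookups.&lt;br /&gt;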
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching. One candidate next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above. It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network, which removes the bottlenecks and increases resiliency against attack, as the single points of failure have been eliminated.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good. Directing traffic on the Internet, whether user-based or application-based, requires a naming service; without a functional one, the bulk of Internet traffic would falter, not knowing where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands. Regardless of its implementation, the service must be both reliable and trusted. The user base depends on some form of trusted source, whether a government initiative, a corporately controlled process or a user-contributed service. Having DNS in public hands would ensure this reliable service. If the public also controls web caching, DNS can be incrementally deployed by piggybacking on that rollout.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest driving the maintenance of this service, the reliability and trust issues are addressed. Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind. Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it would be done when deemed most beneficial for the public. The incremental deployability of the CoDoNS service described above is exactly the property that would let a public authority roll such an upgrade out gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth and enhanced robustness and reliability. The cache is also democratized: users only need to use sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such arrangement must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system would impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Private corporations or independent organizations might implement newer schemes sooner than a public authority would, given less overhead and caution in decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since a public good is by definition something essential that everyone should have access to, ensuring a basic level of service for all users of a given Internet public good is essential. If guaranteed access cannot be given to all of the users within an aspect&#039;s reach, then it should not, by definition, be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Many aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience, yet the parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user.&lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. Moreover, the performance improvements provided by one public good would most likely be amplified by the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet would provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given earlier was full web application caching. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them on the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public: many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. By using this set of criteria, one would be able to identify future public goods candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9508</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9508"/>
		<updated>2011-04-12T03:57:22Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS Evolution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of the consumers&#039; own networks to be the infrastructure of the Internet and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is currently used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While congestion avoidance benefits everyone, with the technology implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know if this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Wikipedia/Net Neutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  In Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the behaviours of the ISPs to be in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industries through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not be as fast as the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure.  In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM Transactions on Networking, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services such as email, instant messaging, and other services more tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for some users, whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural areas where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the wireless connections of neighbours to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure in the urban centre.  The density of connection points has been studied and related to the speeds sustainable by the mesh, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing some infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the taxes would be distributed across all levels, making them almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To take full advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example of this is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of many websites that do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server which can return a cached version of the requested data. Because the total distance the data has to travel is cut down significantly (as web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
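&lt;br /&gt;
The lookup-or-fetch behaviour described above can be sketched in a few lines. This is an illustrative toy only (the class and parameter names are our own, not any particular proxy&#039;s API): serve a stored copy while it is fresh, otherwise fetch from the origin server and store the result.&lt;br /&gt;

```python
import time

# Toy cache sketch (hypothetical names): a stored object is served until
# its time-to-live expires; a miss or stale entry triggers an origin fetch.
class WebCache:
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # url maps to (fetched_at, body)

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry is not None:
            fetched_at, body = entry
            if fetched_at + self.ttl >= time.time():
                return body, "HIT"        # fresh copy: no origin request
        body = fetch_from_origin(url)     # miss or stale: go to origin
        self.store[url] = (time.time(), body)
        return body, "MISS"
```

In practice a real cache also honours server-supplied freshness conditions (the &amp;quot;barring certain conditions&amp;quot; above); a fixed time-to-live stands in for that here.&lt;br /&gt;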
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
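The climb-up-then-copy-down behaviour of such a hierarchy can be sketched as follows. This is a toy illustration with our own names (plain dicts stand in for the local, regional and national caches), not code from the survey:&lt;br /&gt;

```python
# Sketch of a hierarchical cache lookup (hypothetical names): the request
# climbs from the local level upward until some cache holds the object;
# the response then leaves a copy at every level it passed on the way up.
def hierarchical_get(url, levels, fetch_from_origin):
    """levels is ordered local-first, e.g. [local, regional, national],
    each a plain dict mapping url to body."""
    for i, cache in enumerate(levels):
        if url in cache:
            body = cache[url]
            hit_level = i
            break
    else:
        # no level had it: go to the origin web server
        body = fetch_from_origin(url)
        hit_level = len(levels)
    # populate every level below the one that answered
    for cache in levels[:hit_level]:
        cache[url] = body
    return body, hit_level
```

This is what lets popular sites &amp;quot;propagate towards the demand&amp;quot;: after one regional hit, the next request from the same area is satisfied locally.&lt;br /&gt;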
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains metadata about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
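The &amp;quot;metadata about the content of the other caches&amp;quot; idea can be sketched as a shared directory. This toy (our own names and simplifications; real systems use digests or protocols such as ICP rather than direct object access) shows a miss being satisfied by a sibling cache instead of the origin:&lt;br /&gt;

```python
# Sketch of cooperating single-level caches (hypothetical names): each
# cache advertises what it stores, so peers can redirect misses to it.
class DistributedCache:
    def __init__(self, name):
        self.name = name
        self.store = {}      # objects held locally: url maps to body
        self.directory = {}  # peer metadata: url maps to holder's name
        self.peers = {}      # name maps to DistributedCache

    def join(self, other):
        self.peers[other.name] = other
        other.peers[self.name] = self

    def put(self, url, body):
        self.store[url] = body
        # advertise to all peers so their metadata stays current
        for peer in self.peers.values():
            peer.directory[url] = self.name

    def get(self, url):
        if url in self.store:
            return self.store[url], "LOCAL"
        holder = self.directory.get(url)
        if holder is not None:
            return self.peers[holder].store[url], "PEER:" + holder
        return None, "ORIGIN"  # no cooperating cache has it: origin fetch
```

The load balancing and fault tolerance mentioned above come from there being no root cache: any peer holding a copy can answer, and losing one peer loses only its own objects.&lt;br /&gt;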
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try and retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power, removing the burden from users&#039; computers and letting a special-purpose device take over. Since most users do not reset their modems as often as they shut down their computers, this would offer greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built quite inexpensively, especially at large scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while also making use of the locally cached data. In this type of system, web developers would write their applications to use the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together and implement their application; growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available only to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the proposed standardized, hierarchical/distributed hybrid web caches, the amount of wasted bandwidth in the form of unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from one ISP&#039;s user that could be satisfied by another local ISP&#039;s cache must instead be retrieved all the way from the originating web server. With the proposed architecture these caches could work together, essentially multiplying the available cache size, so that such requests are satisfied locally and the number of long-distance web requests drops significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
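The duplication and redistribution behaviour described above can be sketched briefly. The CacheGroup class, the replica count of two and the re-replication policy below are illustrative assumptions about how such a scheme might work, not a specification.&lt;br /&gt;

```python
# Sketch of the duplication idea: each object is stored on two caches,
# and when a cache fails its objects are re-replicated onto the
# survivors so every object keeps its two copies.

class CacheGroup:
    def __init__(self, cache_ids, copies=2):
        self.copies = copies
        self.caches = {cid: {} for cid in cache_ids}

    def put(self, url, data):
        # place replicas on the least-loaded caches
        targets = sorted(self.caches, key=lambda c: len(self.caches[c]))
        for cid in targets[: self.copies]:
            self.caches[cid][url] = data

    def holders(self, url):
        return {cid for cid, store in self.caches.items() if url in store}

    def fail(self, cache_id):
        lost = self.caches.pop(cache_id)
        for url, data in lost.items():        # redistribute lost replicas
            missing = self.copies - len(self.holders(url))
            spares = sorted(
                (c for c in self.caches if url not in self.caches[c]),
                key=lambda c: len(self.caches[c]),
            )
            for cid in spares[:missing]:
                self.caches[cid][url] = data

group = CacheGroup(["a", "b", "c"])
group.put("example.org/p", "payload")
group.fail(next(iter(group.holders("example.org/p"))))
# after the failure, the object still has two live replicas
```

Real systems would also have to bound the re-replication traffic after a failure, but the invariant is the same: surviving caches absorb the lost copies so requests keep being served locally.&lt;br /&gt;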
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the ability to cache full web applications would mean that any user has full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme is incrementally deployable, in terms of both software and infrastructure. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it incorporated the ISP caches already deployed, would carry significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions as well as caches at the higher levels of the hierarchy (provincial, national, etc.) will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or to set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the region a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system. DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers; it can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system: for the purposes of this discussion, DNS is treated as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
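This static-tree model of name resolution can be illustrated directly. The sketch below is purely illustrative: the tree contents and the address returned are made-up placeholders, not real DNS data.&lt;br /&gt;

```python
# Toy version of the model used in this paper: DNS as a static tree,
# keyed by domain labels from the root down, that returns an IP
# address when queried with a name. Contents are placeholders.

TREE = {
    "ca": {
        "carleton": {
            "scs": {"_addr": "192.0.2.10"},   # scs.carleton.ca (placeholder)
        },
    },
}

def resolve(name):
    node = TREE
    for label in reversed(name.split(".")):   # walk from the root downward
        node = node.get(label)
        if node is None:
            return None                       # "no such domain" in real DNS
    return node.get("_addr")

resolve("scs.carleton.ca")    # walks ca -> carleton -> scs
```

Real DNS distributes this tree across many servers and caches the answers, which is exactly where the bottleneck and propagation issues discussed below come from.&lt;br /&gt;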
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service. It is understood that all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any of a number of alternative services, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise when considering user privacy, though. In the case of Google, there is reason to consider how Google would end up treating and using the information it gained access to, even given its clean track record in providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would gain deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine so large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations, centred on bottlenecks, update propagation, attack resiliency and general performance. Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users. For example, Bell Canada customers are served by just two DNS servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet. Caching DNS servers around the world only refresh a record once its time-to-live (TTL) has expired, so this period is required for changes to make their way across.&lt;br /&gt;
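A minimal sketch of why such changes take time to become visible, assuming TTL-style expiry on the resolver side (the class name, times and addresses are all illustrative):&lt;br /&gt;

```python
import time

# Sketch of propagation delay: a caching resolver keeps a record until
# its time-to-live (TTL) expires, so an updated address is only seen
# after the old cache entry times out.

class CachingResolver:
    def __init__(self, clock=time.time):
        self.clock = clock
        self.cache = {}                       # name -> (address, expires_at)

    def resolve(self, name, lookup, ttl=48 * 3600):
        entry = self.cache.get(name)
        if entry and entry[1] > self.clock():
            return entry[0]                   # still fresh: serve from cache
        addr = lookup(name)                   # expired or missing: re-query
        self.cache[name] = (addr, self.clock() + ttl)
        return addr

# Simulated clock so the example is deterministic.
now = [0.0]
r = CachingResolver(clock=lambda: now[0])

r.resolve("example.org", lambda n: "1.1.1.1")        # cached with a 48h TTL
old = r.resolve("example.org", lambda n: "2.2.2.2")  # change not yet visible
now[0] = 49 * 3600                                   # 49 hours later
new = r.resolve("example.org", lambda n: "2.2.2.2")  # TTL expired: change seen
```

Lowering the TTL shortens this window at the cost of more upstream queries, which is exactly the caching trade-off the rest of this section discusses.&lt;br /&gt;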
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified under the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks: malicious users can target the limited number of servers to severely cramp Internet traffic at any time. Measures are in place to prevent this kind of attack; however, as with anything security-based, this requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing, and DNS caches could be contained within the web caching scheme presented in the previous section. The hierarchical structure described can function equally well for DNS purposes: ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion, since users dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when aided by caching. One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all of the factors indicated above. It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Given the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good. In essence, this implementation decentralizes the static DNS tree and distributes it across the network, removing the bottlenecks and increasing resiliency against attack, since the single points of failure are gone.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate for a public good. The way traffic is directed on the Internet, whether user-based or application-based, requires a naming service; without a functional service, the bulk of Internet traffic would falter, not knowing where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands.&lt;br /&gt;
&lt;br /&gt;
Regardless of its implementation, the service is required to be both reliable and trusted. A user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.&lt;br /&gt;
&lt;br /&gt;
Having DNS in public hands would ensure this reliable service. If the public also controlled web caching, DNS could be incrementally deployed and rolled out piggybacked on that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is satisfied. Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind. Misinformation and misdirection are averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it will be done when deemed most beneficial for the public. The incremental deployability of a service such as CoDoNS would allow a public authority to roll out such upgrades gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could continue to function as pockets of the Internet even when disconnected from it.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization would be required to work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring and maintaining the current system, or mandating its conversion, would impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given their lower overhead and caution in decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning that aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are by definition essential and accessible to everyone, ensuring a basic level of service for users of a given Internet public good is paramount. If guaranteed access cannot be given to all of the users within an aspect&#039;s reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience, yet the parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user.&lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect of their benefits. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely be amplified by the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet would provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public: many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to successfully identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and only true, way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9506</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9506"/>
		<updated>2011-04-12T03:55:02Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Alternative/Public */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within the society. The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to the access and use of the Internet. Using three examples of public-goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated. Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it to one is a more difficult process. The Internet is a system of heterogeneous computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e., business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We propose how these aspects could be moved out of the sole control of private companies and converted into public goods. We chose these three pieces because each is absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
The following sections present a few key aspects of the Internet that would be excellent candidates for becoming public goods.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers); these are currently the entities that any user must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet. These companies make decisions based on their own profit margins and with little regard for the public good. One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;. Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion; it works by assigning priorities to packets using various criteria decided by the ISPs. While congestion control can benefit everyone, when the technology is implemented by private companies we do not know which protocols are limited, by how much, or whether shaping is done only at peak times. We also do not know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure. Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP. This could be implemented by slowing or disallowing traffic to competitors. While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;. More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down. This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two. The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism. This would transform the infrastructure into a virtual public good by legislating the behaviours of the ISPs to be in accordance with the best interest of the public. The problem is that politicians have their own goals and can be unduly influenced by private industries through lobbyists or other means. Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it. These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet. We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people. This new infrastructure would coexist with the current ISPs and operate in parallel. Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds. In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal). This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries. Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;. When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres. Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing. Finally, at the highest level, different countries could connect their meshes together. As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada. As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too. Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
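The super-node election described above can be sketched in a few lines. This is a hypothetical illustration, not the mechanism of the cited DART or flooding protocols: the node attributes and the capacity score are illustrative assumptions, favouring stable, high-bandwidth, non-mobile nodes for routing duty.&lt;br /&gt;

```python
# Hypothetical sketch of super-node election in a wireless mesh.
# The scoring formula and node fields are illustrative assumptions,
# not taken from the routing protocols cited above.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    uptime_hours: float    # availability: static home nodes score high
    bandwidth_mbps: float  # link capacity
    mobile: bool           # mobile nodes make poor super nodes

def score(n: Node) -> float:
    """Favour stable, high-bandwidth, non-mobile nodes."""
    penalty = 0.5 if n.mobile else 1.0
    return penalty * (n.uptime_hours * n.bandwidth_mbps)

def elect_super_nodes(nodes, k=2):
    """Return the k best-scoring nodes to act as routing super nodes."""
    return sorted(nodes, key=score, reverse=True)[:k]

nodes = [
    Node("home-pc", 700, 20, False),
    Node("laptop", 40, 50, True),
    Node("router", 2000, 10, False),
    Node("phone", 15, 30, True),
]
supers = elect_super_nodes(nodes, k=2)
print([n.node_id for n in supers])  # ['router', 'home-pc']
```

A real mesh would re-run such an election periodically as nodes join, leave and move, which is where the routing-accuracy research cited above becomes relevant.&lt;br /&gt;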
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other similar services more tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs.  This in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness. A mesh has no single point of connection, so it cannot be disabled as easily as current ISPs can be. Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition. Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone. This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites, and could also make Internet access available to fiscally disadvantaged members of the population. Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;. Due to its low population density, Canada has areas that parallel the rural regions where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out. It could start in a single neighborhood, using the wireless of the neighbours to create a small network. As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed. The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure of the urban centre. The relationship between the density of connection points and the speeds the mesh can sustain has been studied, again allowing incremental deployment, this time in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of some infrastructure would necessitate an increase in taxes. Since the support would be at all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change. An example of this is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to take advantage of the faster network connection to download it. Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of websites do not change very often (i.e., logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency to the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that web caches add to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
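The basic mechanism described above, serving a stored copy when one exists and forwarding to the origin otherwise, can be sketched as a minimal cache. This is an illustrative toy, not a real proxy: a production cache would honour HTTP Cache-Control headers, validation and eviction, and the fixed TTL here is an assumption.&lt;br /&gt;

```python
# Minimal proxy-style web cache sketch (illustrative toy; a real proxy
# would honour HTTP freshness headers and implement eviction).
import time

class WebCache:
    def __init__(self, ttl_seconds=3600):
        self.store = {}          # url -> (fetched_at, body)
        self.ttl = ttl_seconds

    def get(self, url, fetch):
        """Return cached body if fresh, else fetch from origin and cache."""
        entry = self.store.get(url)
        if entry and time.time() - entry[0] < self.ttl:
            return entry[1], "HIT"
        body = fetch(url)        # forward the request to the origin server
        self.store[url] = (time.time(), body)
        return body, "MISS"

cache = WebCache()
origin_calls = []
def fetch(url):
    origin_calls.append(url)   # count how often the origin is contacted
    return f"&lt;html&gt;page for {url}&lt;/html&gt;"

cache.get("http://example.org/logo", fetch)          # MISS: goes to origin
body, status = cache.get("http://example.org/logo", fetch)
print(status, len(origin_calls))  # HIT 1 — the repeat never reaches the origin
```

The second request being a hit with only one origin contact is exactly the bandwidth saving the advantages above describe.&lt;br /&gt;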
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
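The hierarchical lookup described above, a miss climbing toward the national level and the response leaving a copy at each level on the way back down, can be sketched as follows. The level names and in-process structure are illustrative assumptions.&lt;br /&gt;

```python
# Sketch of a hierarchical web cache: a miss climbs the chain toward
# the top level, and the response leaves a copy at every level on the
# way back down (level names are illustrative).
class CacheLevel:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.store = name, parent, {}

    def get(self, url, origin_fetch):
        if url in self.store:
            return self.store[url], self.name      # hit at this level
        if self.parent:
            body, hit_at = self.parent.get(url, origin_fetch)
        else:
            body, hit_at = origin_fetch(url), "origin"
        self.store[url] = body   # leave a copy on the way back down
        return body, hit_at

national = CacheLevel("national")
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)

fetches = []
def origin_fetch(url):
    fetches.append(url)
    return "data"

local.get("http://example.org/a", origin_fetch)    # populates all levels
_, hit_at = regional.get("http://example.org/a", origin_fetch)
print(hit_at, len(fetches))  # regional 1 — served without touching the origin
```

The copy left at each level is what lets popular sites "propagate towards the demand" as the paragraph above notes.&lt;br /&gt;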
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
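The metadata-based cooperation described above can be sketched with a shared directory standing in for the per-cache metadata. This is a simplified illustration, not the cited systems' design: real implementations exchange digests or summaries rather than sharing one in-process dictionary.&lt;br /&gt;

```python
# Sketch of one-level distributed caching: each cache advertises what
# it holds, and a miss is forwarded to the peer that has the object
# before going to the origin (a shared dict stands in for exchanged
# metadata digests; this is a simplification).
class DistCache:
    def __init__(self, name, directory):
        self.name, self.store = name, {}
        self.directory = directory        # shared url -> holding cache

    def put(self, url, body):
        self.store[url] = body
        self.directory[url] = self        # advertise to cooperating peers

    def get(self, url, origin_fetch):
        if url in self.store:
            return self.store[url], self.name
        holder = self.directory.get(url)  # consult peer metadata
        if holder is not None:
            return holder.store[url], holder.name
        body = origin_fetch(url)
        self.put(url, body)
        return body, "origin"

directory = {}
a = DistCache("cache-a", directory)
b = DistCache("cache-b", directory)
a.put("http://example.org/x", "page")
body, served_by = b.get("http://example.org/x", lambda u: "page")
print(served_by)  # cache-a — b's miss was satisfied by its peer
```

Because any peer holding the object can serve it, load spreads across the level and the loss of one cache does not lose the whole tier, which is the fault-tolerance benefit noted above.&lt;br /&gt;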
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
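The hybrid behaviour, polling same-level siblings first and escalating to a parent only on a collective miss, can be sketched as below. The in-process method calls are simplified stand-ins for the UDP query/reply exchange that ICP actually specifies.&lt;br /&gt;

```python
# Hybrid sketch: on a miss, a cache first polls its same-level siblings
# (standing in for ICP-style queries) and only then climbs to its
# parent. Simplified: real ICP uses UDP messages, not method calls.
class HybridCache:
    def __init__(self, name, siblings=None, parent=None):
        self.name, self.store = name, {}
        self.siblings = siblings or []
        self.parent = parent

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        for sib in self.siblings:          # 1. ask same-level peers
            if url in sib.store:
                return sib.store[url], sib.name
        if self.parent:                    # 2. escalate up the hierarchy
            return self.parent.get(url)
        return None, None

parent = HybridCache("regional")
c1 = HybridCache("local-1", parent=parent)
c2 = HybridCache("local-2", siblings=[c1], parent=parent)
c1.store["http://example.org/y"] = "obj"
_, served_by = c2.get("http://example.org/y")
print(served_by)  # local-1 — a sibling hit avoids the parent entirely
```

Sibling hits keep traffic within a level, while the parent link preserves the hierarchy's reach, combining the two advantages the paragraph above describes.&lt;br /&gt;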
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to the end users. Web caching ultimately succeeds by keeping relevant data close to the end users. Typically these web caches are implemented by ISPs, and they do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. It could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
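The "passive" variant described above, where the proxy only tracks which user machines hold which objects and redirects requesters to a nearby peer, can be sketched as follows. The class and its index structure are illustrative assumptions, not a design from the cited work.&lt;br /&gt;

```python
# Sketch of the "passive" neighbourhood scheme: the local proxy keeps
# an index of which user machines hold which cached objects and simply
# points requesters at a nearby peer (names are illustrative).
class NeighbourhoodProxy:
    def __init__(self):
        self.index = {}  # url -> list of peer addresses holding a copy

    def register(self, url, peer):
        """A user's machine reports that it has cached this object."""
        self.index.setdefault(url, []).append(peer)

    def locate(self, url):
        """Point the requester at a peer, or None to fetch upstream."""
        peers = self.index.get(url)
        return peers[0] if peers else None

proxy = NeighbourhoodProxy()
proxy.register("http://example.org/video", "192.168.1.20")
print(proxy.locate("http://example.org/video"))   # 192.168.1.20
print(proxy.locate("http://example.org/missing")) # None — go upstream
```

Since the proxy mediates every lookup, it is also the natural place to enforce the privacy controls mentioned above.&lt;br /&gt;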
&lt;br /&gt;
Another option to allow for lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while also making use of the locally cached data. In this type of system, web developers would build their applications to use the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. This would mean that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without needing the enormous financial and physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would retain access to all of the web data currently stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted in the form of unneeded web requests sent out from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by the caches of another local ISP must instead be retrieved all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in such web requests being satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
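The lookup order described above can be sketched as a simple walk up the tiers. Everything here (the tier names and cached contents) is an illustrative assumption rather than a description of any deployed system.&lt;br /&gt;

```python
# Hypothetical cache tiers, checked in order from nearest to farthest.
# Contents are invented placeholders for illustration only.
CACHE_TIERS = [
    ("neighbourhood", {"example.org/index.html": "page A"}),
    ("regional", {"example.net/data.json": "page B"}),
    ("provincial", {}),
    ("national", {}),
]

def fetch(url):
    """Return (source tier, content), consulting shared caches before the origin."""
    for tier_name, store in CACHE_TIERS:
        if url in store:
            return tier_name, store[url]
    # Every tier missed: retrieve from the originating web server.
    return "origin", "fetched " + url + " from the originating server"
```

A neighbourhood hit returns almost immediately; only a miss at every shared tier incurs the full round trip to the origin, which is the latency saving described above.&lt;br /&gt;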
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level is likely to be larger than the amount that can be efficiently used as a cache, data could be duplicated. This duplication would provide fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching: in the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how web caching is done, could be implemented whenever it is in the best interests of the public. Currently, we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional- and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it incorporated the ISP caches already in place, would certainly entail significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), would need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or to set up the new caches. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g., a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, there is a requirement that a convenient method be in place to refer to the different resources within this distributed system. DNS (the Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the purposes of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system: DNS is treated as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
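The simplified model used here, a static tree queried with a domain name and returning an IP address, can be sketched as follows. The tree contents and addresses are invented for illustration only.&lt;br /&gt;

```python
# A toy model of DNS as a static tree: top-level domains at the root,
# with leaves holding IP addresses. Contents are illustrative assumptions.
DNS_TREE = {
    "com": {"example": {"www": "93.184.216.34"}},
    "ca": {"carleton": {"scs": "134.117.0.1"}},
}

def resolve(name):
    """Walk the tree from the top-level domain down; return None on a miss."""
    node = DNS_TREE
    for label in reversed(name.split(".")):
        if not isinstance(node, dict) or label not in node:
            return None
        node = node[label]
    # Only a leaf (a string) is a usable answer; an interior node is a miss.
    return node if isinstance(node, str) else None
```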
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service. Users implicitly accept that all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL. This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy. In the case of Google, there is reason to consider how Google will treat and use the information it gains access to, even given its clean track record in providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would now have deep access to user behaviour, being able to determine every single thing that is being sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt; Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing ample content, it is difficult to imagine this large a responsibility lying on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit from the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, there are some problems that arise with the current implementations.  These issues arise around bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is to be considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users. For example, for this service Bell Canada customers across the entire country are served by just two servers. Just as web caching has been shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also affects attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world update their records on a static schedule and, because of caching, this period is required for the changes to get across.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks. Malicious users can target the limited servers to severely cramp Internet traffic at any time. Measures are in place to prevent this kind of attack; however, like anything security-based, this requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing. DNS caches could be contained within the web caching schemes presented in the previous section; the hierarchical structure described can function equally well for DNS purposes. Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
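A DNS cache piggybacking on a web-cache tier could be as simple as a TTL-based table in front of an upstream resolver. The sketch below is a minimal illustration; the class and the stand-in upstream resolver are invented, and the 48-hour TTL echoes the propagation window mentioned earlier.&lt;br /&gt;

```python
import time

class DnsCache:
    """Minimal TTL-based cache sketch; 'upstream' stands in for a real resolver."""

    def __init__(self, ttl_seconds, upstream):
        self.ttl = ttl_seconds
        self.upstream = upstream   # callable taking a name, returning an IP
        self.entries = {}          # name mapped to (ip, expiry timestamp)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.entries.get(name)
        if entry is not None and entry[1] > now:
            return entry[0]                        # fresh hit: no upstream call
        ip = self.upstream(name)                   # miss or expired: refetch
        self.entries[name] = (ip, now + self.ttl)
        return ip
```

On a fresh hit the upstream resolver is never contacted, which is exactly the latency saving this section describes.&lt;br /&gt;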
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching. One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above. It also has the benefit of being incrementally deployable, which is a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report&#039;s discussion, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, the static DNS tree is decentralized and distributed across the network with this implementation.  This removes the issues of bottlenecks and increases resiliency against attack as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good. The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service. Without a functional service, the bulk of Internet traffic would falter, as it would not know where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands.&lt;br /&gt;
&lt;br /&gt;
Regardless of its implementation, it is a service that must be both reliable and trusted. The user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.&lt;br /&gt;
&lt;br /&gt;
Having DNS in public hands would ensure this reliable service. If the public also controls web caching, DNS could be incrementally deployed and rolled out piggybacking on that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest driving the maintenance of this service, the reliability and trust issues are addressed. Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind. Misinformation and misdirection would be averted by placing trust in the applicable public authority.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it will be done when deemed most ideal for the public. The incremental deployability of a service such as CoDoNS would make such an upgrade practical to roll out.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority will have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring, maintaining, or mandating the current system will impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in their decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these qualities, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for the users of a given Internet public good is essential. If guaranteed access to an aspect of the Internet cannot be given to all of the users in its reach, then it should not, by definition, be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet would provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern-day society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9505</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9505"/>
		<updated>2011-04-12T03:54:18Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Alternative/Public */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society. The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to the access and use of the Internet. Using three examples of public-good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated. Finally, criteria to identify other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet has become a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be easy to declare the Internet a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e., business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be taken out of the hands of private companies alone and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, we present a few key aspects of the Internet that would be excellent candidates for becoming public goods.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion; it works by assigning priorities to packets using various criteria decided by the ISPs.  While congestion control benefits everyone, when the technology is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We also don&#039;t know if the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Wikipedia/Net Neutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During an uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
Given the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problems are that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes: users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability: users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISP might even disappear entirely.&lt;br /&gt;
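The super-node election step described above can be sketched as follows. This is a toy illustration with hypothetical criteria (uptime and bandwidth thresholds), not the DART or Scalable Landmark Flooding algorithms cited here, which use far more sophisticated address-based routing:&lt;br /&gt;

```python
# Toy sketch of super-node election in a wireless mesh. The criteria
# (uptime and bandwidth floors) are hypothetical illustrations only.
def elect_super_nodes(nodes, uptime_min=0.9, bandwidth_min=10.0):
    """Pick nodes stable and fast enough to route for their neighbours."""
    supers = [n for n in nodes
              if n["uptime"] > uptime_min and n["mbps"] > bandwidth_min]
    # Fall back to the single best candidate so routing never stalls.
    if not supers:
        supers = [max(nodes, key=lambda n: (n["uptime"], n["mbps"]))]
    return supers

nodes = [
    {"id": "desktop-1", "uptime": 0.99, "mbps": 25.0},
    {"id": "laptop-1",  "uptime": 0.40, "mbps": 54.0},
    {"id": "desktop-2", "uptime": 0.95, "mbps": 12.0},
]
print([n["id"] for n in elect_super_nodes(nodes)])
# The stable desktops qualify; the intermittently available laptop does not.
```

In practice the election would re-run as nodes join, leave and move, which is exactly why maintaining accurate routing information is the hard part.&lt;br /&gt;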
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services, such as email and instant messaging, that are more tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh has no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet is surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to our low population density, has areas comparable to the rural regions where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure in the urban centre.  The relationship between the density of connection points and the speeds the mesh can sustain has been studied, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in the provision of infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To take full advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service: if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data then, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e., logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in latency apparent to the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (as web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage at the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by serving data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
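The basic mechanism described above (store a delivered object, serve the stored copy while it remains valid, otherwise go back to the origin) can be sketched as follows. The class name, API and time-to-live freshness rule are hypothetical illustrations; real caches honour HTTP validation headers rather than a single TTL:&lt;br /&gt;

```python
import time

# Minimal sketch of the caching behaviour described above. A stored copy
# is served while it is "fresh"; a stale or missing copy triggers a fetch
# from the origin server, and the result is kept for later requests.
class WebCache:
    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self.store = {}          # url -> (fetched_at, body)
        self.hits = self.misses = 0

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry is not None and time.time() - entry[0] > self.ttl:
            entry = None         # stale: one of the "certain conditions"
        if entry is not None:
            self.hits += 1
            return entry[1]      # served locally, no origin traffic
        self.misses += 1
        body = fetch_from_origin(url)
        self.store[url] = (time.time(), body)
        return body

cache = WebCache()
origin = lambda url: "logo bytes for " + url
cache.get("http://example.com/logo.png", origin)   # miss: fetched
cache.get("http://example.com/logo.png", origin)   # hit: served from cache
print(cache.hits, cache.misses)                    # 1 1
```

The hit on the second request is exactly the saved outgoing traffic and latency that the bulleted advantages describe.&lt;br /&gt;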
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
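The hierarchical lookup just described can be sketched as follows. This is an illustrative toy (each level is modelled as a plain dictionary), not the survey&#039;s actual protocol: a request tries each level from client to national, and a hit is copied back down into every level that missed:&lt;br /&gt;

```python
# Sketch of hierarchical lookup: try each cache level from lowest to
# highest; on a hit (or an origin fetch), leave a copy at every level
# below the one that answered, so popular data propagates toward demand.
def hierarchical_get(url, levels, fetch_from_origin):
    """levels is ordered lowest (client) to highest (national)."""
    for i, cache in enumerate(levels):
        if url in cache:
            body = cache[url]
            break
    else:
        i, body = len(levels), fetch_from_origin(url)
    for cache in levels[:i]:      # populate the levels that missed
        cache[url] = body
    return body

client, local, regional, national = {}, {}, {}, {}
levels = [client, local, regional, national]
regional["http://example.com/"] = "cached page"
page = hierarchical_get("http://example.com/", levels, lambda u: "origin page")
print(page, "http://example.com/" in client, "http://example.com/" in local)
# cached page True True
```

The copy-down step is what lets the next request from the same neighbourhood be answered at the client or local level.&lt;br /&gt;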
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
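The cooperation-via-meta-data idea can be sketched as follows. Here the "meta-data" is simply the full set of a peer&#039;s URLs; real systems exchange compact summaries (e.g. digests) instead, and the class and method names are hypothetical:&lt;br /&gt;

```python
# Sketch of one-level distributed caching: each cache consults peer
# meta-data (here, a plain URL set standing in for a compact digest)
# before going to the origin server.
class DistributedCache:
    def __init__(self, name):
        self.name, self.store, self.peers = name, {}, []

    def summary(self):
        return set(self.store)            # meta-data shared with peers

    def get(self, url, fetch_from_origin):
        if url in self.store:
            return self.store[url]
        for peer in self.peers:
            if url in peer.summary():     # meta-data says the peer has it
                body = peer.store[url]
                self.store[url] = body    # keep a local copy as well
                return body
        body = fetch_from_origin(url)
        self.store[url] = body
        return body

a, b = DistributedCache("a"), DistributedCache("b")
a.peers, b.peers = [b], [a]
b.store["http://example.com/img.png"] = "image bytes"
print(a.get("http://example.com/img.png", lambda u: "origin"))
# image bytes
```

Because any cooperating cache can answer, load spreads across the level and the loss of one cache only costs its local copies, which is the fault tolerance noted above.&lt;br /&gt;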
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
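A hybrid lookup can be sketched by combining the two previous ideas: at each level a group of sibling caches is consulted (in the spirit of ICP sibling queries, though no actual ICP messages are modelled here), and a full miss moves the request up a level. The structure and names are illustrative assumptions:&lt;br /&gt;

```python
# Sketch of the hybrid scheme: levels of sibling groups, lowest first.
# Within a level, siblings cooperate; on a full miss the request climbs
# to the next level, and finally to the origin server.
def hybrid_get(url, levels, fetch_from_origin):
    """levels: list of sibling groups; each sibling is a dict."""
    for siblings in levels:
        for cache in siblings:
            if url in cache:
                return cache[url]
    body = fetch_from_origin(url)
    levels[0][0][url] = body     # store at the nearest local sibling
    return body

local = [{}, {"http://example.com/a": "page a"}]   # two cooperating locals
regional = [{}]
print(hybrid_get("http://example.com/a", [local, regional], lambda u: "origin"))
# page a
```

The request for /a is answered by the second local sibling without ever reaching the regional level, combining the distributed level&#039;s load balancing with the hierarchy&#039;s locality.&lt;br /&gt;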
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would now work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, followed by provincial/state-level web caches and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web cache hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
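The passive variant described above, where the neighbourhood proxy keeps the directory and users&#039; machines are mere storage, can be sketched as follows (class and method names are hypothetical):&lt;br /&gt;

```python
# Sketch of the passive peer-assisted variant: the proxy decides what
# data exists where, and answers lookups by naming the peer that holds
# a copy rather than serving the bytes itself.
class NeighbourhoodProxy:
    def __init__(self):
        self.directory = {}               # url -> peer holding a copy

    def publish(self, peer, url):
        self.directory[url] = peer        # proxy assigns data to peers

    def locate(self, url):
        return self.directory.get(url)    # None: fall back to ISP cache

proxy = NeighbourhoodProxy()
proxy.publish("neighbour-42", "http://example.com/video.mp4")
print(proxy.locate("http://example.com/video.mp4"))   # neighbour-42
print(proxy.locate("http://example.com/other"))       # None
```

Keeping the directory at the proxy is also what lets it mediate the privacy concerns mentioned above, since peers never learn who requested what.&lt;br /&gt;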
&lt;br /&gt;
Another option to allow for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at large scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them while also making use of the locally cached data. In this type of system, web developers would build their applications to use the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. This would mean that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the proposed standardized, hierarchical/distributed hybrid web caches, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from one ISP&#039;s user that could be satisfied by a nearby cache belonging to another ISP must instead be retrieved all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. More requests would be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
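The lookup path through such a hierarchy can be sketched as follows. This is a minimal illustration only; the level names and interfaces are assumptions made for this sketch, not part of any deployed caching system.&lt;br /&gt;

```python
# Hypothetical sketch of a hierarchical web-cache lookup: each level
# (neighbourhood, regional, national) is tried in order before falling
# back to the origin server. Level names are illustrative assumptions.

class CacheLevel:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def get(self, url):
        return self.store.get(url)

    def put(self, url, content):
        self.store[url] = content

def fetch(url, hierarchy, origin_fetch):
    """Try each cache level in order; on a miss everywhere, fetch from
    the origin and populate every level on the way back."""
    missed = []
    for level in hierarchy:
        content = level.get(url)
        if content is not None:
            for m in missed:        # fill the lower levels that missed
                m.put(url, content)
            return content, level.name
        missed.append(level)
    content = origin_fetch(url)     # miss everywhere: go long-distance
    for level in hierarchy:
        level.put(url, content)
    return content, "origin"

hierarchy = [CacheLevel("neighbourhood"), CacheLevel("regional"), CacheLevel("national")]
page, served_by = fetch("http://example.org/", hierarchy, lambda u: "example page")
print(served_by)   # first request: "origin"
page, served_by = fetch("http://example.org/", hierarchy, lambda u: "example page")
print(served_by)   # repeat request: "neighbourhood"
```

Only the first request for a page travels the long distance; every later request from the same area is served by the nearest level that holds a copy.&lt;br /&gt;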
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that a user&#039;s web request can be satisfied nearby are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: requests satisfied within a user&#039;s immediate neighbourhood would be extremely fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
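The duplication and redistribution idea can be illustrated with a small sketch. The cache names and the replication factor below are hypothetical choices for illustration, not part of the proposal itself.&lt;br /&gt;

```python
# Hypothetical sketch: each cached object is held by k caches; when a
# cache fails, its objects are re-replicated onto the survivors so the
# replication factor (and thus fault tolerance) is restored.
import itertools

def place(objects, caches, k=2):
    """Assign each object to k caches, round-robin."""
    ring = itertools.cycle(caches)
    return {obj: [next(ring) for _ in range(k)] for obj in objects}

def handle_failure(placement, failed, caches):
    """Drop the failed cache and top each affected object back up."""
    survivors = [c for c in caches if c != failed]
    for obj, replicas in placement.items():
        if failed in replicas:
            replicas.remove(failed)
            for c in survivors:
                if c not in replicas:   # pick a survivor without a copy
                    replicas.append(c)
                    break
    return placement

caches = ["cache1", "cache2", "cache3"]
placement = place(["a", "b", "c"], caches, k=2)
handle_failure(placement, "cache2", caches)
# Every object still has two replicas, none on the failed cache.
print(all("cache2" not in r and len(r) == 2 for r in placement.values()))
```

After the failure, every object is back at its full replication factor, which is the behaviour the paragraph above describes.&lt;br /&gt;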
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, since full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region is disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies to improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme is incrementally deployable, both in its software and in its infrastructure. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure in place would certainly entail significant infrastructure costs, even if it incorporated the ISP caches already in place. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work either to convert the old ISP caches or to set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g., a new modem as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system. DNS (the Domain Name System) serves this purpose by allowing resources to be referred to by name rather than by a series of numbers; it can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a higher-level view of the system is taken. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database or tree mapping names to IP addresses for its users.&lt;br /&gt;
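This static-tree view of DNS can be made concrete with a toy sketch. The tree contents and the address below are illustrative placeholders, not real DNS data.&lt;br /&gt;

```python
# Toy model of DNS as a static tree, per the simplified view above:
# resolution walks from the root down the labels of the name, read
# right to left. All entries here are illustrative placeholders.

tree = {
    "com": {
        "example": {
            "www": "93.184.216.34",   # placeholder address
        }
    }
}

def resolve(name, root):
    """Return the address stored for a dotted name, or None."""
    node = root
    for label in reversed(name.split(".")):
        if not isinstance(node, dict) or label not in node:
            return None               # the NXDOMAIN case in real DNS
        node = node[label]
    return node if isinstance(node, str) else None

print(resolve("www.example.com", tree))   # 93.184.216.34
print(resolve("mail.example.com", tree))  # None
```

A user supplies only the name; the walk down the tree, and the question of who hosts each subtree, stays hidden from them.&lt;br /&gt;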
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service.  Users implicitly accept that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any number of alternatives, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.  In the case of Google, even given a clean track record of providing free applications and services, there is reason to consider how it would treat and use the information it gains access to.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep insight into user behaviour, since it could observe every single thing a user seeks out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing ample content, it is difficult to imagine this large a responsibility resting on the backs of &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations, centred on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottlenecks because a small number of servers serves many users.  For example, Bell Canada customers are served by only two DNS servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world refresh their records on static schedules and, with caching taken into account, this period is required for the changes to reach everyone.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited number of servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS.  DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the web caching scheme presented in the previous section: the hierarchical structure described can function equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
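A minimal sketch of such a DNS cache, using per-record expiry times in the spirit of DNS TTLs. The class and field names here are invented for illustration, and the addresses are placeholders.&lt;br /&gt;

```python
# Minimal TTL-based DNS cache sketch: each record is served locally
# until its expiry passes, after which the next lookup must go
# upstream. This is the mechanism behind the propagation delay
# described above. All names and addresses are illustrative.
import time

class DNSCache:
    def __init__(self):
        self._entries = {}   # name maps to (address, expiry timestamp)

    def put(self, name, address, ttl_seconds):
        self._entries[name] = (address, time.time() + ttl_seconds)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expiry = entry
        if time.time() >= expiry:
            # Record expired: drop it so the next lookup goes upstream,
            # which is how nameserver changes eventually propagate.
            del self._entries[name]
            return None
        return address

cache = DNSCache()
cache.put("www.example.com", "93.184.216.34", ttl_seconds=3600)
print(cache.get("www.example.com"))   # fresh entry: the address
cache.put("old.example.com", "10.0.0.1", ttl_seconds=-1)
print(cache.get("old.example.com"))   # already expired: None
```

The longest TTL in use bounds how stale a cached answer can be, which is why nameserver changes can take up to the 48 hours mentioned above to be seen everywhere.&lt;br /&gt;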
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system improves on all the factors indicated above.  It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet.  Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, the static DNS tree is decentralized and distributed across the network with this implementation.  This removes the issues of bottlenecks and increases resiliency against attack as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  Directing traffic on the Internet, whether user- or application-driven, requires a naming service; without a functional one, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands.&lt;br /&gt;
&lt;br /&gt;
Regardless of its implementation, the service must be both reliable and trusted.  The user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service.&lt;br /&gt;
&lt;br /&gt;
Having DNS in public hands would ensure this reliable service.  If the public also controls web caching, DNS can be incrementally deployed and rolled out as a piggyback on that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in mind when maintaining this service, the reliability and trust issues are addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it would be done when deemed most beneficial for the public.  The incremental deployment of the CoDoNS service described above is a good example of how such an upgrade could be rolled out.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users simply use the sites they see fit, and those sites appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring or mandating, and then maintaining, the current system would impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority, given their lower overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, cycling through the public&#039;s hands very quickly and proving very expensive. In general, novel aspects of the Internet should be left in private hands, and only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are by definition something everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience, yet the parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user.&lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in selected locations before being widely introduced. This allows new systems to grow dynamically, starting in the areas that need them most and eventually reaching more remote regions.&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect of their benefits. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase thanks to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet that fulfill the above criteria are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of these cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example is the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them on the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it grows more essential as time goes by. It is due to this necessity that people have a much greater incentive to own and control how the Internet works, in essence transitioning the Internet into a public good. However, significant portions of the Internet would be undesirable or unlikely candidates for public ownership: many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and only true, way of doing this will be to give users the overall control.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9504</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9504"/>
		<updated>2011-04-12T03:52:20Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to access to, and use of, the Internet.  Using three examples of public-good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria for identifying other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable for individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be easy to identify the Internet as a public good, identifying how to convert it into one is more difficult.  The Internet is a system of heterogeneous computers and hardware, and it runs on an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (e.g., business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces because they are absolutely essential to the current operation of the Internet, and we propose how each could be removed from the sole control of private companies and converted into a public good. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using various criteria decided by the ISPs.  While congestion control can benefit everyone, when the technology is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether the limits apply only at peak times.  We also don&#039;t know if the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During an uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
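To make the mechanism concrete, the prioritization that packet shaping performs can be sketched as a simple priority queue fed by a token bucket. This is an illustrative sketch only; the protocol classes, rate and priorities below are our own assumptions, not any real ISP&#039;s policy.&lt;br /&gt;

```python
import time
from collections import deque

class Shaper:
    """Toy traffic shaper: high-priority traffic drains first, and a
    token bucket limits the overall outgoing byte rate."""

    def __init__(self, rate_bytes_per_sec):
        self.rate = float(rate_bytes_per_sec)
        self.tokens = self.rate          # start with a full bucket
        self.last = time.monotonic()
        # Lower number = higher priority; the mapping is the shaper's policy.
        self.queues = {0: deque(), 1: deque()}

    def enqueue(self, size, protocol):
        # Hypothetical policy: latency-sensitive protocols get priority.
        prio = 0 if protocol in ("dns", "voip") else 1
        self.queues[prio].append(size)

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.tokens + (now - self.last) * self.rate, self.rate)
        self.last = now

    def dequeue(self):
        """Return the size of the next packet allowed out, or None."""
        self._refill()
        for prio in (0, 1):
            q = self.queues[prio]
            if q and self.tokens >= q[0]:
                self.tokens -= q[0]
                return q.popleft()
        return None
```

The point of the sketch is that the operator, not the user, decides both the priority mapping and the rate, which is exactly the opacity problem described above.&lt;br /&gt;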
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
Given the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by requiring the ISPs to act in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing that behaviour is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well; as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
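The super-node election mentioned above could, under our assumptions, be as simple as ranking nodes by advertised stability and speed, so that static home machines outrank mobile devices. The scoring weights and node fields below are illustrative and are not taken from the cited routing papers.&lt;br /&gt;

```python
# Hypothetical sketch of super-node election in the proposed mesh:
# each node advertises its uptime fraction and bandwidth, and the
# highest-scoring nodes are elected to handle routing.

def score(node):
    # Favour stable, fast nodes; weights are illustrative assumptions.
    return 0.7 * node["uptime"] + 0.3 * node["bandwidth_mbps"] / 100.0

def elect_super_nodes(nodes, fraction=0.2):
    """Return the ids of the top fraction of nodes by score, at least one."""
    ranked = sorted(nodes, key=score, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return [n["id"] for n in ranked[:k]]
```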
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services that tolerate lower speeds, such as email and instant messaging, from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh has no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, in which other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet is surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, with its low population density, has areas comparable to the rural regions where this technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the urban centre&#039;s higher-speed wired infrastructure.  The relationship between the density of connection points and the speeds the mesh can sustain has been studied&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;, again allowing incremental deployment, but in the dimension of speed.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, however, the cost would be distributed across all levels, becoming almost imperceptible to any individual taxpayer.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To take full advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to use the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
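The network-awareness described under Software Changes above can be sketched as a simple link-selection rule inside an application; the size threshold and link names here are hypothetical.&lt;br /&gt;

```python
# Illustrative sketch: a mail client chooses between the free public
# mesh and a faster paid ISP link based on payload size.  The
# threshold below is an assumption, not a recommendation.

MESH_THRESHOLD_BYTES = 256 * 1024   # small transfers stay on the mesh

def choose_link(payload_bytes, paid_link_available):
    """Pick which network to use for a given transfer."""
    if payload_bytes > MESH_THRESHOLD_BYTES and paid_link_available:
        return "isp"    # large attachment: use the fast private network
    return "mesh"       # default to the public infrastructure
```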
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be reused later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
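The basic hit/miss behaviour described above can be sketched in a few lines; here "fetch_from_origin" is a hypothetical stand-in for a real HTTP request, and expiry checking and cache eviction are omitted.&lt;br /&gt;

```python
# Minimal sketch of a web cache: serve a stored copy when one exists,
# otherwise fetch from the origin server and store the result.

class WebCache:
    def __init__(self, fetch_from_origin):
        self.store = {}
        self.fetch = fetch_from_origin
        self.hits = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1          # served locally: no origin traffic
            return self.store[url]
        body = self.fetch(url)      # cache miss: go to the origin server
        self.store[url] = body
        return body
```

Every hit is a request that never leaves the cache, which is the source of the bandwidth and latency savings listed above.&lt;br /&gt;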
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
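The lookup path through such a hierarchy can be sketched as follows; the level names are illustrative, and freshness checks are omitted.&lt;br /&gt;

```python
# Sketch of hierarchical caching: a miss is escalated to the parent
# level, and the response leaves a copy at every level it passes back
# down through.

class Level:
    def __init__(self, name, parent=None, origin=None):
        self.name, self.parent, self.origin = name, parent, origin
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url]
        if self.parent is not None:
            body = self.parent.get(url)   # escalate: local, regional, national
        else:
            body = self.origin(url)       # top level contacts the origin server
        self.store[url] = body            # leave a copy on the way down
        return body
```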
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only a single level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the contents of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
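A minimal sketch of this one-level cooperation, assuming a directly updated directory in place of the digest exchange a real system would use:&lt;br /&gt;

```python
# Sketch of distributed caching: each cache keeps a directory of which
# peer holds which URL and forwards misses to the holder.

class DistCache:
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.directory = {}   # url mapped to the peer believed to hold it

    def put(self, url, body, peers):
        self.store[url] = body
        for p in peers:       # advertise the new entry to cooperating caches
            p.directory[url] = self

    def get(self, url):
        if url in self.store:
            return self.store[url]
        holder = self.directory.get(url)
        if holder is not None:
            return holder.store.get(url)  # one-hop fetch from the peer
        return None                       # miss: would go to the origin
```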
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists; however, a number of caches on each level also cooperate with each other in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
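The hybrid lookup order can be sketched as follows. This mirrors ICP behaviour in spirit only: real ICP (RFC 2186) uses UDP query messages with timeouts, whereas here the sibling check is a direct dictionary lookup.&lt;br /&gt;

```python
# Sketch of an ICP-style hybrid lookup: on a miss, a cache first asks
# its siblings at the same level, then falls back to its parent.

def hybrid_get(url, cache, siblings, parent_fetch):
    if url in cache:
        return cache[url]
    for sib in siblings:             # stand-in for an ICP query to each sibling
        if url in sib:
            cache[url] = sib[url]    # sibling hit: copy the object locally
            return cache[url]
    body = parent_fetch(url)         # all siblings miss: ask the parent level
    cache[url] = body
    return body
```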
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. Users could share their caches with each other, allowing the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently carried at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power, removing the burden from users&#039; computers and allowing a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built inexpensively, especially at large scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would develop their applications to make use of the available resources and then maintain a minimal back end to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could then realistically come together to implement their application: growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the proposed standardized, hierarchical/distributed hybrid web caches, the bandwidth wasted on unneeded requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from one ISP&#039;s users must be retrieved all the way from the originating web server, even when a cache run by another local ISP could have satisfied it. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. More requests would be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
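The escalation described above can be sketched in a few lines of Python. This is only a minimal illustration of the idea, and every class and function name in it is hypothetical rather than part of any proposed standard:&lt;br /&gt;

```python
# A minimal sketch of the proposed escalation path. A request is tried
# against each cache level in turn (neighbourhood -> regional -> national)
# and only falls back to the originating server when every level misses.
# On a hit or an origin fetch, the levels that missed are back-filled so
# later requests are served as locally as possible.

def fetch_from_origin(url):
    # Stand-in for a real HTTP request to the originating web server.
    return "origin content for " + url

class CacheLevel:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def get(self, url):
        return self.store.get(url)

    def put(self, url, content):
        self.store[url] = content

def lookup(url, hierarchy):
    """Walk the hierarchy from most local to most remote."""
    missed = []
    for level in hierarchy:
        content = level.get(url)
        if content is not None:
            for m in missed:          # back-fill the more local levels
                m.put(url, content)
            return content, level.name
        missed.append(level)
    content = fetch_from_origin(url)  # every level missed
    for m in missed:
        m.put(url, content)
    return content, "origin"
```

The first request for an object travels to the origin, but it populates every level it passed through, so a neighbour&#039;s subsequent request is answered at the neighbourhood level.&lt;br /&gt;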
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with the full web application caching. Now in the event that a single region is disconnected from the Internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability with the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
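The duplication and redistribution idea can be made concrete with a small sketch. This is an illustrative toy, not a design taken from the text: it assumes each object is written to k nodes chosen by hashing its URL, so the loss of one node leaves live copies that can be re-replicated onto the survivors:&lt;br /&gt;

```python
import hashlib

# Illustrative sketch of fault tolerance through duplication: each cached
# object is stored on k nodes chosen deterministically by hashing its URL.
# When a node fails, the surviving copies are re-replicated so the data
# remains available and redundant.

class ReplicatedCache:
    def __init__(self, nodes, k=2):
        self.k = k
        self.nodes = {n: {} for n in nodes}  # node name -> local store

    def _replicas(self, url):
        # Deterministically pick k of the currently live nodes.
        names = sorted(self.nodes)
        start = int(hashlib.sha256(url.encode()).hexdigest(), 16) % len(names)
        return [names[(start + i) % len(names)] for i in range(self.k)]

    def put(self, url, content):
        for n in self._replicas(url):
            self.nodes[n][url] = content

    def get(self, url):
        for store in self.nodes.values():
            if url in store:
                return store[url]
        return None

    def fail(self, node):
        # A node goes down; re-replicate so every surviving object again
        # has at least k copies among the remaining nodes.
        self.nodes.pop(node)
        surviving = {}
        for store in self.nodes.values():
            surviving.update(store)
        for url, content in surviving.items():
            self.put(url, content)
```

Any real deployment would of course need consistency and eviction policies; the point here is only that spare capacity plus duplication buys the reliability described above.&lt;br /&gt;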
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached means that any user would have full access to any web application or data currently &#039;living&#039; on a reachable cache. If a region were disconnected, all users in that region would still be able to use any application or data stored in any cache within that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies to improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthwhile investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in terms of both software and infrastructure, is incrementally deployable. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place would certainly involve significant infrastructure costs, even if it incorporated the ISP caches already in place. Although a considerable amount of infrastructure may exist in large urban centers, rural regions, as well as the caches in the higher levels of the hierarchy (provincial, national, etc.), would likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers. DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet: a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a more simplistic, higher-level view of the system. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Today the service is typically provided by an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, mapping names to IP addresses for its users.&lt;br /&gt;
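The &amp;quot;static, distributed tree&amp;quot; view taken here can be made concrete with a toy sketch. The names and addresses below are purely illustrative, and real DNS resolution is far richer than this:&lt;br /&gt;

```python
# A toy version of the tree model used in this discussion: the namespace
# is a tree walked from the root, rightmost label first, and a query
# returns the IP address stored at the leaf (or None, i.e. no such domain).

class Zone:
    def __init__(self):
        self.children = {}   # label -> child Zone
        self.address = None  # IP address, if this node names a host

def add_record(root, name, ip):
    node = root
    for label in reversed(name.split(".")):
        node = node.children.setdefault(label, Zone())
    node.address = ip

def resolve(root, name):
    node = root
    for label in reversed(name.split(".")):
        node = node.children.get(label)
        if node is None:
            return None  # no such domain
    return node.address
```

In the real system the levels of this tree are held by different servers (root, top-level domain, authoritative nameserver), which is what makes it a distributed tree rather than a single database.&lt;br /&gt;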
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service. Users accept that all of their Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any number of alternative services, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues arise here as well. In the case of Google, it is worth considering how Google would treat and use the information it gains access to, even given its clean track record in providing free applications and services to users.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would gain deep insight into user behaviour, being able to determine every single thing a user seeks out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing ample content, it is difficult to imagine this large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, the current implementations suffer from several problems, centred on bottlenecks, update propagation, attack resiliency and general performance. Any replacement system would need to improve upon these issues.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottlenecks because a small number of servers serve many users. For example, Bell Canada customers are served by just two servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain&#039;s nameserver can take up to 48 hours to propagate across the Internet. DNS servers and resolvers around the world cache records for a fixed period (the record&#039;s time-to-live), and a change is only seen once the cached copies expire, so this period is required to get the changes across.&lt;br /&gt;
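The propagation delay can be sketched with a toy caching resolver. The fake clock and authority table below are illustrative assumptions, not part of any real resolver API:&lt;br /&gt;

```python
# Sketch of why a nameserver change takes time to propagate: a resolver
# caches each record until its time-to-live (TTL) expires and only then
# re-queries the authoritative source, so a change is invisible to that
# resolver until the cached copy times out.

class CachingResolver:
    def __init__(self):
        self.entries = {}  # name -> (ip, expiry time)

    def resolve(self, name, now, authority):
        entry = self.entries.get(name)
        if entry is not None and now < entry[1]:
            return entry[0]           # served from cache, possibly stale
        ip, ttl = authority[name]     # cache expired: ask the authority
        self.entries[name] = (ip, now + ttl)
        return ip
```

A record cached with a one-hour TTL will keep returning the old address for up to an hour after the authority changes it, which is exactly the propagation window described above.&lt;br /&gt;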
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure noted under the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks. Malicious users can target the limited servers to severely cramp Internet traffic at any time. Measures are in place to prevent this kind of attack; however, like anything security related, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major part of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing. DNS caches could be contained within the web caching scheme presented in the previous section: the hierarchical structure described can function equally well for DNS purposes. Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when aided by caching. One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts improvements on all of the factors indicated above. It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, this implementation decentralizes the static DNS tree and distributes it across the network. This removes the bottlenecks and increases resiliency against attack, as the single points of failure are eliminated.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate for a public good. Directing traffic on the Internet, whether user-driven or application-driven, requires a naming service; without a functional service, the bulk of Internet traffic would falter, not knowing where to go. ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services suggest that the ideal scenario lies in public hands.&lt;br /&gt;
&lt;br /&gt;
Regardless of its implementation, the service must be both reliable and trusted. Its user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service.&lt;br /&gt;
&lt;br /&gt;
Having DNS in public hands would ensure this reliable service. If the public also controls web caching, DNS can be incrementally deployed and rolled out as a piggyback on that infrastructure.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest in maintaining this service, the reliability and trust issues are addressed. Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind. Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next generation service or upgrade, it will be done when deemed most beneficial for the public. The incremental deployability of a service such as CoDoNS would make such an upgrade feasible without disrupting the existing system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users simply use the sites they see fit, and those sites appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as self-contained pockets of the Internet, preserving a basic level of service even for disconnected regions.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization would be required to work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and quickly become obsolete, cycling through the public&#039;s hands very quickly and proving very expensive. In general, novel aspects of the Internet should be left in private hands; only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve them, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are by definition something everyone should have access to and something deemed essential, ensuring a basic level of service for every user of a given Internet public good is required. If guaranteed access cannot be given to all of the users within its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system, and it is a central concern here as well.&lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience, yet the parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user.&lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under public control is the cumulative effect that would occur. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from that provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely be amplified by the gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet meeting the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet would provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater interest in owning and controlling how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or impractical for the public to hold. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only certain portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and only true, way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9503</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9503"/>
		<updated>2011-04-12T03:48:32Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Advantages of Web Caching as a Public Good */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within the society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for benefit of the entire population.    In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS) the viability and benefits of this conversion will be illustrated.  Finally, criteria to define other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, from simple shepherds to colonial empires to current democratic superpowers, all communities have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process. The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We chose these three pieces because they are absolutely essential to the current operation of the Internet, and we propose how each could be moved out of the sole control of private companies and converted into a public good. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, we present a few key aspects of the Internet that would be excellent candidates for becoming public goods.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers); these are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While such congestion control can benefit everyone, when the technology is implemented by private companies we do not know which protocols are limited, by how much, or whether shaping is only applied at peak times.  We also do not know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Wikipedia/Net Neutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found; here we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by requiring the ISPs to act in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, and disruptive or unfair behaviour by the ISPs could affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM Transactions on Networking, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve, and as the level of support increases, the publicly offered speed could increase as well.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services that are more tolerant of lower speeds, such as email and instant messaging, from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and could also make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Due to its low population density, Canada has areas that parallel the rural regions where the technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the wireless of the neighbours to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and it relates to the potential speeds that the mesh can sustain, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in providing infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, however, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
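As a rough illustration of this idea, the following Python sketch shows how a bandwidth-aware client might choose between the two networks. The threshold value, the function name and the network labels are purely illustrative assumptions, not part of any existing system.&lt;br /&gt;

```python
# Hypothetical sketch: pick the free mesh overlay for small, latency-tolerant
# transfers and fall back to the faster paid ISP link for large ones.
# The ~1 MB threshold is an assumed, tunable value.
MESH_THRESHOLD_BYTES = 1_000_000

def pick_network(payload_bytes, mesh_available=True, isp_available=True):
    """Return which network a bandwidth-aware client would use."""
    if mesh_available and payload_bytes <= MESH_THRESHOLD_BYTES:
        return "mesh"   # small transfers (email text, IM) stay on the overlay
    if isp_available:
        return "isp"    # large attachments use the faster private link
    if mesh_available:
        return "mesh"   # fall back: a slow connection beats none at all
    raise ConnectionError("no network available")
```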
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be reused later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of websites do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, within this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. For many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server that can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage on the providing web server&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections passed through to it by serving data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
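To make the basic mechanism described above concrete, the following Python sketch shows a minimal in-memory web cache with a simple freshness window. The class name, the TTL value and the interfaces are illustrative assumptions rather than any real proxy&#039;s API.&lt;br /&gt;

```python
import time

class WebCache:
    """Minimal in-memory web cache: return a stored object while it is
    still fresh, otherwise fetch from the origin server and keep a copy."""

    def __init__(self, fetch_origin, ttl_seconds=300):
        self.fetch_origin = fetch_origin  # callable: url -> bytes
        self.ttl = ttl_seconds            # assumed freshness window
        self.store = {}                   # url -> (stored_at, body)

    def get(self, url):
        entry = self.store.get(url)
        if entry is not None:
            stored_at, body = entry
            if time.time() - stored_at < self.ttl:
                return body, True         # cache hit: origin not contacted
        body = self.fetch_origin(url)     # miss or stale: go to the origin
        self.store[url] = (time.time(), body)
        return body, False
```

A second request for the same URL within the freshness window is served entirely from the cache, which is the bandwidth saving the list above describes.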
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as for our purposes they are merely implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture, in which web caches are placed at different levels of the network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national level cache. Web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
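A minimal sketch of this hierarchical lookup, with illustrative names and no claim to match any particular implementation, might look like the following:&lt;br /&gt;

```python
class HierarchicalCache:
    """Sketch of the hierarchical scheme: a miss is passed to the parent
    (local -> regional -> national -> origin server), and the object is
    copied into every level on the way back down."""

    def __init__(self, name, parent=None, fetch_origin=None):
        self.name = name
        self.parent = parent              # next cache up the hierarchy
        self.fetch_origin = fetch_origin  # only the top level talks to origin
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url]        # satisfied at this level
        if self.parent is not None:
            body = self.parent.get(url)   # pass the miss upward
        else:
            body = self.fetch_origin(url) # top of hierarchy: go to origin
        self.store[url] = body            # leave a copy at this level
        return body
```

After one client fetches a popular object, every level on the path holds a copy, so later requests from the same region never leave the local cache.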
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests received from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-Second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
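One way to picture this cooperation is the following Python sketch, in which each cache advertises a summary of its contents to its peers. Real systems exchange compact digests rather than the exact URL sets assumed here, and all names are illustrative.&lt;br /&gt;

```python
class DistributedCache:
    """Sketch of one-level distributed caching: a local miss is forwarded
    to a peer that (according to its advertised summary) holds the object,
    before falling back to the origin server."""

    def __init__(self, fetch_origin):
        self.fetch_origin = fetch_origin  # callable: url -> object
        self.store = {}
        self.peers = []                   # cooperating caches at the same level

    def add_peer(self, peer):
        self.peers.append(peer)

    def summary(self):
        # Meta-data advertised to peers; a stand-in for a compact digest.
        return set(self.store)

    def get(self, url):
        if url in self.store:
            return self.store[url]
        for peer in self.peers:           # consult peer meta-data first
            if url in peer.summary():
                body = peer.store[url]
                self.store[url] = body
                return body
        body = self.fetch_origin(url)     # no peer has it: fetch from origin
        self.store[url] = body
        return body
```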
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture, in which a hierarchy of caches exists but a number of caches on each level cooperate with each other in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol&amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches and finally a national level. All of these would be standardized, allowing regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web cache hierarchies would reduce wasted bandwidth and improve the end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capacity. This increase in storage would allow more web data to be stored in more places, translating into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in end users&#039; best interest to participate if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently handled at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM Conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power, removing the burden from users&#039; computers and allowing a special-purpose device to take over. Since most users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at large scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by caches of another local ISP must instead be satisfied all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such web requests would then be satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed strategy uses distributed caching at each level of the hierarchy, it adds a level of reliability that is absent from modern web caching. Since the combined storage of the distributed caches at each level would likely exceed what can be used efficiently as a cache, the surplus would allow for data duplication. This duplication provides fault tolerance, and the caches could be implemented to redistribute the remaining data whenever a single cache goes down. Full web application caching would improve reliability further still: in the event that a single region is disconnected from the Internet, users could continue to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is re-established, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications.&lt;br /&gt;
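&lt;br /&gt;
The duplication-and-redistribution idea can be sketched as follows. This is a hypothetical placement scheme (each object stored on k nodes, re-replicated from surviving copies after a failure), not the actual mechanism of any deployed cache:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of replicated cache placement with redistribution on failure.
# Each object is stored on `replicas` distinct nodes, so losing one
# node never loses the object, and its copies are restored afterwards.

class ReplicatedCacheGroup:
    def __init__(self, nodes, replicas=2):
        self.nodes = {n: {} for n in nodes}  # node name: its store
        self.replicas = replicas

    def _targets(self, key, pool):
        # Deterministically pick `replicas` nodes for a key.
        ranked = sorted(pool, key=lambda n: hash((key, n)))
        return ranked[: self.replicas]

    def put(self, key, value):
        for n in self._targets(key, self.nodes):
            self.nodes[n][key] = value

    def get(self, key):
        for store in self.nodes.values():
            if key in store:
                return store[key]
        return None

    def fail(self, node):
        # Drop the node, then re-replicate its objects from the
        # surviving copies so the replica count is restored.
        lost = self.nodes.pop(node)
        for key, value in lost.items():
            for n in self._targets(key, self.nodes):
                self.nodes[n][key] = value

group = ReplicatedCacheGroup(["n1", "n2", "n3"], replicas=2)
group.put("/index.html", "cached page")
group.fail("n1")                    # one cache goes down...
print(group.get("/index.html"))     # ...the object is still served
```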
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, because full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in the region. No such basic level of service exists with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies that improve how caching is done, could be implemented whenever it is in the public&#039;s best interest. Currently, such upgrades happen only when they are a worthwhile investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both its software and its infrastructure, is incrementally deployable. A scheme like the one proposed would most likely start in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start in other urban areas, which could then be joined together by regional and provincial level caches. Importantly, users have an incentive to join (the previously mentioned benefits), and as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits of making web caching a public good, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure in place would certainly involve significant infrastructure costs, even if it incorporated the ISP caches already deployed. Although large urban centers may have a considerable amount of infrastructure available, rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), would likely need sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. Converting the old ISP caches or setting up the new caches would take a massive amount of work, including initial installation of the software and rigorous testing, which would require a significant number of person-hours. Once set up, the systems would have to be closely monitored and tuned as conditions changed in the regions each cache served. The caches would also require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely pay for both the infrastructure and support costs through a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would have to provide CPU cycles and storage space (which is itself a cost) or purchase specialized hardware (e.g. a new modem, as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application needs only to supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system. For the purposes of this discussion, DNS is treated as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Today the service is typically the responsibility of an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, mapping names to IP addresses for its users.&lt;br /&gt;
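&lt;br /&gt;
The static, distributed tree view of DNS adopted here can be sketched as follows. The data and record are toy values chosen for illustration; a real resolver walks from the root servers down through TLD and authoritative servers:&lt;br /&gt;
&lt;br /&gt;
```python
# Toy model of DNS as a tree mapping a name to an IP address.
# Labels are stored root-first: org, then example, then www.

tree = {
    "org": {
        "example": {
            "www": "93.184.216.34",  # hypothetical A record
        }
    }
}

def resolve(name, tree):
    # "www.example.org" is walked right to left: org, example, www.
    node = tree
    for label in reversed(name.split(".")):
        node = node[label]
    return node

print(resolve("www.example.org", tree))  # 93.184.216.34
```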
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service. Users must accept that all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any of a number of alternatives, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in &amp;quot;good Samaritans&amp;quot; within a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, however, when considering user privacy. In the case of Google, it is worth considering how Google will treat and use the information it gains access to, even given its clean track record of providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; Google would gain deep access to user behaviour, able to determine every single thing its users seek out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues must also be considered for any &amp;quot;community-based&amp;quot; project. As good as user-generated communities can be at providing ample content, it is difficult to imagine this large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, the current implementations have some problems, centred on bottlenecks, update propagation, attack resiliency and general performance. Any replacement system should improve upon these issues.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottlenecks because a small number of servers is accessed by many users. For example, Bell Canada customers are served by two DNS servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain&#039;s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world update their records on static schedules and serve cached records until those records expire, so this period is needed for changes to spread.&lt;br /&gt;
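&lt;br /&gt;
The propagation delay can be modelled in a few lines. This is a toy sketch with hypothetical names and TTL values, assuming only the standard behaviour that a resolver serves a cached record until its time-to-live expires:&lt;br /&gt;
&lt;br /&gt;
```python
# Toy model of why nameserver changes propagate slowly: a caching
# resolver keeps answering from its cache until the record's TTL expires.

class CachingResolver:
    def __init__(self, authoritative):
        self.authoritative = authoritative  # name: (ip, ttl_seconds)
        self.cache = {}                     # name: (ip, expiry_time)

    def resolve(self, name, now):
        entry = self.cache.get(name)
        if entry is not None and entry[1] > now:
            return entry[0]                 # TTL not yet expired: cached
        ip, ttl = self.authoritative[name]
        self.cache[name] = (ip, now + ttl)  # refresh from the authority
        return ip

auth = {"example.org": ("1.1.1.1", 3600)}
resolver = CachingResolver(auth)
resolver.resolve("example.org", now=0)        # caches 1.1.1.1 until t=3600
auth["example.org"] = ("2.2.2.2", 3600)       # the domain changes servers
print(resolver.resolve("example.org", now=1800))  # 1.1.1.1 (still cached)
print(resolver.resolve("example.org", now=4000))  # 2.2.2.2 (TTL expired)
```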
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks: malicious users can target the limited set of servers to severely cramp Internet traffic at any time. Measures are in place to prevent this kind of attack; however, like anything security-related, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in both performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing. DNS caches could be contained within the web caching scheme presented in the previous section: the hierarchical structure described there functions equally well for DNS. Ideally, the DNS cache would piggyback at each level of the web cache, providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when aided by caching. One candidate for a next generation naming system, actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts improvements on all the factors indicated above. It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Given the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, this implementation decentralizes the static DNS tree and distributes it across the network. This removes the bottlenecks and increases resiliency against attack, since the single points of failure have been removed.&lt;br /&gt;
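&lt;br /&gt;
The peer-to-peer placement idea can be sketched with plain consistent hashing. This is a deliberate simplification with hypothetical node names, not the actual CoDoNS algorithm (which builds on a structured overlay with proactive caching); it only illustrates why removing any one node is not fatal:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of peer-to-peer name placement via consistent hashing:
# names and nodes hash onto a ring, and a record lives on the first
# node clockwise from the name's position. No single server is a
# fixed choke point, and losing a node only moves that node's records.

import hashlib

def ring_position(s):
    # sha1 is deterministic across runs, unlike the built-in hash().
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % 2**16

def node_for(name, nodes):
    target = ring_position(name)
    ring = sorted(nodes, key=ring_position)
    for n in ring:
        if ring_position(n) >= target:
            return n
    return ring[0]  # wrap around the ring

nodes = ["node-a", "node-b", "node-c", "node-d"]
owner = node_for("example.org", nodes)

# Removing any non-owner node never moves this record: the system
# degrades gracefully instead of failing at a single point.
for failed in nodes:
    if failed != owner:
        survivors = [n for n in nodes if n != failed]
        assert node_for("example.org", survivors) == owner
```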
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good. Directing traffic on the Internet, whether user- or application-initiated, requires a naming service; without a functional one, the bulk of Internet traffic would falter, not knowing where to go. ISPs and alternative services have provided a strong framework thus far, but the interference imposed by ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands.&lt;br /&gt;
&lt;br /&gt;
Regardless of its implementation, it is a service that must be both reliable and trusted. The user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service.&lt;br /&gt;
&lt;br /&gt;
Placing DNS in public hands would ensure this reliable service. If the public also controls web caching, DNS can be incrementally deployed by piggybacking on that rollout.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest driving the service, the reliability and trust issue is satisfied. Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind. Misinformation and misdirection would be averted by placing trust in the applicable public authority.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next generation service or upgrade, it will be done when deemed most beneficial for the public. The incremental deployment of the CoDoNS service described above is a good example of an upgrade a public authority could roll out.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically alongside a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring, mandating and maintaining the current system would impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given their lower overhead and lesser caution in decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, cycling through the public&#039;s hands very quickly at great expense. In general, novel aspects of the Internet should be left in private hands; only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are, by definition, something everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect. Although proposed public goods would all have to meet the criteria listed above, they would often do so in different ways; for instance, the basic level of service provided by the physical infrastructure as a public good differs significantly from that provided by the proposed web caching scheme. Moreover, the performance improvements delivered by one public good would most likely be amplified by the gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage becomes.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits, public goods on the Internet would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given earlier was the full web application caching discussed above. This kind of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it grows more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, in effect transitioning the Internet into a public good. However, significant portions of the Internet would be undesirable or unlikely candidates for public ownership: many modern businesses rely on the Internet for a significant portion of their revenue, and they are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first examined which aspects of the Internet to convert into public goods through three ideal candidates: physical infrastructure, web caching and DNS. From these, a list of common criteria for identifying future public goods on the Internet was derived. Using this set of criteria, one can identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and only true, way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9501</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9501"/>
		<updated>2011-04-12T03:44:08Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Web Caching as a Public Good */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society. The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies enabling it should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to access and use of the Internet. Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated. Finally, criteria for identifying other candidate public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to today&#039;s democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, they are deemed essential, beneficial and non-excludable to individuals and to the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to declare the Internet a public good, identifying how to convert it into one is a more difficult process. The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. It is much too large to be effectively managed by a single governing body, and certain aspects of the Internet (i.e. business entities) should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a baseline set of criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
The following sections present a few key aspects of the Internet that would be excellent candidates for becoming public goods.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, it quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware existing outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISP ownership of the Internet&#039;s infrastructure.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using criteria decided by the ISPs.  While traffic shaping itself can benefit everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether shaping is only done at peak times.  We also don&#039;t know whether the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While ISPs have not openly proposed this, the movement known as Net Neutrality has fought against the possibility&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Wikipedia/Net Neutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently, we have become acutely aware that ISPs provide convenient choke points: during the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
Given the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by requiring the ISPs to act in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing it is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably, the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so, we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect the centres together.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve, and as the level of support increases, the publicly offered speed could increase as well.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh does not present a single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, in which other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Due to its low population density, Canada has areas that parallel the rural regions where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied and shown to relate to the speeds sustainable by the mesh, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, however, the cost would be distributed across those levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example would be email, which is normally considered a low-bandwidth service: if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be served later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data then, barring certain conditions, the cached data is returned and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in apparent latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
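The hit/miss mechanism described above can be sketched in a few lines. The following is an illustrative sketch only (the class and method names are invented for this example, not taken from any real proxy): responses are stored keyed by URL with a freshness lifetime, and the origin server is contacted only when the cached copy is missing or stale.&lt;br /&gt;

```python
# Illustrative proxy-cache sketch (hypothetical names, not a real proxy):
# store responses keyed by URL with a freshness lifetime, and contact the
# origin server only when the cached copy is missing or stale.
import time


class WebCache:
    def __init__(self, max_age=3600):
        self.max_age = max_age      # freshness lifetime in seconds
        self.entries = {}           # url -> (fetched_at, body)
        self.origin_fetches = 0     # counts trips to the origin server

    def fetch_origin(self, url):
        # Stand-in for forwarding the request to the originating web server.
        self.origin_fetches += 1
        return f"body of {url}"

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self.entries.get(url)
        if entry is not None:
            fetched_at, body = entry
            if now - fetched_at < self.max_age:
                return body         # fresh hit: origin not contacted
        body = self.fetch_origin(url)
        self.entries[url] = (now, body)
        return body


cache = WebCache(max_age=60)
cache.get("http://example.org/", now=0)    # miss: fetched from origin
cache.get("http://example.org/", now=30)   # fresh hit: served from cache
cache.get("http://example.org/", now=120)  # stale: fetched again
```

A real proxy would also honour server-supplied freshness headers and evict entries under memory pressure; the sketch captures only the hit/miss/stale decision that the bandwidth and latency advantages above depend on.&lt;br /&gt;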
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
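This pass-up-then-copy-down behaviour can be sketched as follows. The code is a hypothetical illustration (the class and variable names are invented here), not a description of any deployed cache hierarchy.&lt;br /&gt;

```python
# Hypothetical sketch of hierarchical cache lookup (invented names): a miss
# at one level is passed up the hierarchy, and the response is copied into
# every lower level on the way back down, so popular objects propagate
# toward the demand.

class CacheLevel:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent        # next level up; None at the national level
        self.store = {}             # url -> cached response body

    def fetch_origin(self, url):
        # Stand-in for contacting the originating web server.
        return f"content of {url}"

    def get(self, url):
        if url in self.store:
            return self.store[url]          # hit at this level
        if self.parent is not None:
            data = self.parent.get(url)     # pass the miss upward
        else:
            data = self.fetch_origin(url)   # top level contacts the origin
        self.store[url] = data              # leave a copy on the way down
        return data


national = CacheLevel("national")
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)

local.get("http://example.org/logo.png")
# After one request, every level of the hierarchy holds a copy.
```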
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is a single level of caches that cooperate with one another to satisfy web requests. To do this, each cache retains metadata about the content of all of the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
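A minimal sketch of this cooperation, assuming each cache simply advertises the set of URLs it holds as its metadata (a deliberate simplification of the digests real systems exchange; all names here are invented for illustration):&lt;br /&gt;

```python
# Hypothetical sketch of one-level distributed caching (invented names):
# each cache consults its peers' advertised metadata and forwards a local
# miss to the peer known to hold the object.

class DistributedCache:
    def __init__(self, name):
        self.name = name
        self.store = {}     # url -> cached response body
        self.peers = []     # cooperating caches at the same level

    def advertise(self):
        # Metadata shared with peers: here, simply the set of held URLs.
        return set(self.store)

    def get(self, url):
        if url in self.store:
            return self.store[url]          # local hit
        for peer in self.peers:
            if url in peer.advertise():     # consult peer metadata
                return peer.store[url]      # satisfied by a peer
        return None                         # full miss: would go to the origin


a, b = DistributedCache("a"), DistributedCache("b")
a.peers, b.peers = [b], [a]
b.store["http://example.org/x"] = "payload"
a.get("http://example.org/x")   # satisfied from peer b, not the origin
```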
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. It could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. This would mean that a small number of people with a very good idea could realistically come together and implement their application; growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in all of the reachable caches. This added robustness would certainly reduce the panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that might be satisfiable by the caches of another local ISP must instead be retrieved all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in such web requests being satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. The improvement would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: web requests that could be satisfied within a user&#039;s immediate neighbourhood would be extremely fast. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed web caching strategy implements distributed caching at each level of the caching hierarchy, it injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data could be duplicated. This duplication would allow for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. These caches would also drastically improve reliability, especially with full web application caching: if a single region is disconnected from the Internet, users could still use popular cached web applications and data until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, because full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region is disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how caching is done, can be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software and infrastructure wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these become popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions as well as caches in the higher levels of the hierarchy (provincial, national, etc.) will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems are set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a simpler, higher-level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Currently, providing the service is the responsibility of an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
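&lt;br /&gt;
Viewed this way, a resolver is little more than a lookup against a distributed table, with a local cache in front of it. The sketch below is a toy illustration of that mental model; the table contents and function names are invented for the example, and a real resolver queries remote servers rather than an in-memory dictionary.&lt;br /&gt;

```python
# Toy model of the static, distributed-tree view of DNS taken above:
# a query either hits the local cache or is answered by an upstream
# table standing in for the global system.
upstream = {"example.org": "93.184.216.34"}   # stand-in for the DNS tree
local_cache = {}

def resolve(name):
    if name in local_cache:
        return local_cache[name]       # answered locally, no round trip
    address = upstream[name]           # in reality: query the ISP resolver
    local_cache[name] = address
    return address
```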
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  Users should understand that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their DNS requests to be processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
However, issues arise when considering user privacy.  In the case of Google, there is reason to consider how Google would treat and use the information it gains access to, even given its clean track record of providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to observe everything a user looks up.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility lying on the back of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These concern bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be addressed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users.  For example, Bell Canada customers are served by just two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world cache records and refresh them on a fixed schedule, so this period is required for a change to reach everyone.&lt;br /&gt;
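&lt;br /&gt;
The effect of these fixed refresh schedules can be illustrated with a toy model, assuming for the sake of the example one refresh pass per cycle at each level of a four-level hierarchy (the level names and the single record are invented for illustration):&lt;br /&gt;

```python
# Toy model: a domain address changes at the root, and each lower cache
# only picks up the new value when it next refreshes from its parent,
# so the change takes one refresh cycle per level to propagate down.
levels = ["root", "national", "regional", "neighbourhood"]
records = {level: "1.2.3.4" for level in levels}
records["root"] = "5.6.7.8"            # the domain moved to a new address

cycles = 0
while records["neighbourhood"] != records["root"]:
    snapshot = dict(records)           # everyone refreshes simultaneously
    for parent, child in zip(levels, levels[1:]):
        records[child] = snapshot[parent]
    cycles += 1                        # ends after 3 cycles here
```

With realistic refresh intervals on the order of hours, a few such cycles add up to the day or two of propagation delay noted above.&lt;br /&gt;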
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified under the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks: malicious users can target the limited servers to severely disrupt Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS.  DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the web caching schemes presented in the previous section, as the hierarchical structure described there functions equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt;. Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, the static DNS tree is decentralized and distributed across the network with this implementation.  This removes the issues of bottlenecks and increases resiliency against attack as the single points of failure have been removed.&lt;br /&gt;
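&lt;br /&gt;
A flavour of this decentralization can be conveyed with a consistent-hashing sketch. To be clear, CoDoNS itself is built on the Pastry overlay with Beehive replication; the generic hash-ring placement below is only a simplified stand-in showing how names can be spread across peers with no central server (the peer names are invented for the example):&lt;br /&gt;

```python
import bisect
import hashlib

# Simplified stand-in for DHT-style placement: hash each peer and each
# domain name onto a ring; a name is cached by the first peer whose
# hash follows it on the ring, so no single server owns the whole tree.
peers = ["peer-a", "peer-b", "peer-c", "peer-d"]

def ring_position(key):
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

ring = sorted(ring_position(p) for p in peers)
owner = {ring_position(p): p for p in peers}

def home_peer(name):
    # wrap around the ring with the modulo when the hash is past the end
    idx = bisect.bisect(ring, ring_position(name)) % len(ring)
    return owner[ring[idx]]
```

Because every peer can compute the same placement independently, a failed peer removes only its slice of names rather than taking down the whole service, which is the resiliency gain described above.&lt;br /&gt;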
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  The fashion in which traffic is directed on the Internet, whether user-based or application-based, requires the use of a naming service.  Without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services suggest that the ideal scenario lies in public hands.&lt;br /&gt;
&lt;br /&gt;
Regardless of its implementation, the service must be both reliable and trusted.  The user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service.  &lt;br /&gt;
&lt;br /&gt;
Having DNS in public hands would ensure this reliable service.  If the public also controls web caching, DNS can be incrementally deployed and rolled out as a piggyback on that infrastructure.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest driving the service, the reliability and trust issues are addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted, assuming the relevant public authority is trustworthy.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of the CoDoNS service described earlier makes it a natural candidate for such an upgrade. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users simply use the sites they see fit, and those sites appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as does any good that is brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority, given less overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands; only after they have proven themselves vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users for a given Internet public good is essential. If an aspect of the Internet cannot be given guaranteed access to all of the users in its reach, then it should not be considered a public good, by definition.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience, yet the parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect that would occur. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential. It is due to this necessity that people have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public: many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one can identify future public goods candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9500</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9500"/>
		<updated>2011-04-12T03:39:02Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Web Caching */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria to identify other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable for individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion; they do this by assigning priorities to packets according to criteria the ISPs themselves decide.  While reducing congestion benefits everyone, when the technology is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is done only at peak times.  We also don&#039;t know whether the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Wikipedia/Net Neutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: during the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by requiring the ISPs to behave in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law is passed preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh would exist in conjunction with the ISPs&#039; current infrastructure and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres in which the mesh is located could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
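To make the super-node idea concrete, the following sketch shows one very simple way highly available, well-connected nodes might be promoted to routing roles. The node list, the fitness score and the election fraction are all hypothetical illustrations; the cited routing protocols (DART, Scalable Landmark Flooding) use far more sophisticated mechanisms.&lt;br /&gt;

```python
# Hypothetical sketch: electing super nodes in a wireless mesh.
# Node attributes, scoring weights and the election threshold
# are illustrative assumptions, not part of any cited protocol.

def elect_super_nodes(nodes, fraction=0.2):
    """Rank nodes by a simple fitness score and promote the top
    fraction to act as routing super nodes."""
    def fitness(node):
        # Favour highly available, well-connected, static nodes.
        return node["uptime"] * node["bandwidth_mbps"]

    ranked = sorted(nodes, key=fitness, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return [n["id"] for n in ranked[:cutoff]]

mesh = [
    {"id": "home-pc-1", "uptime": 0.99, "bandwidth_mbps": 50},
    {"id": "laptop-7", "uptime": 0.30, "bandwidth_mbps": 20},
    {"id": "home-pc-2", "uptime": 0.95, "bandwidth_mbps": 25},
    {"id": "phone-3", "uptime": 0.10, "bandwidth_mbps": 10},
]
print(elect_super_nodes(mesh))  # the most available, fastest nodes win
```

As the mobile nodes come and go, re-running the election keeps routing responsibility on whatever static, high-availability nodes are currently present.&lt;br /&gt;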
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services, such as email and instant messaging, that are more tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  Because a mesh does not present a single point of connection, it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for some users, whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental rollout.  It could start in a single neighborhood, using the neighbours&#039; wireless hardware to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the urban centre&#039;s higher-speed wired infrastructure.  The density of connection points has been studied, and it is related to the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in providing infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, however, the taxes would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To take full advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, normally considered a low-bandwidth service: if a large attachment were present, it would make sense to use the faster network connection to download it.  Thus software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of websites that do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can exist either on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in latency apparent to the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server that can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end-user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
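The basic hit/miss behaviour underlying all of these benefits can be sketched as follows. The WebCache class, the fetch_from_origin callback and the simple time-to-live freshness rule are illustrative assumptions, not a description of any production proxy.&lt;br /&gt;

```python
# Minimal sketch of a caching proxy's hit/miss logic.
import time

class WebCache:
    def __init__(self, ttl_seconds=3600):
        self.store = {}          # url -> (body, time fetched)
        self.ttl = ttl_seconds   # how long an object stays fresh

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry is not None:
            body, fetched_at = entry
            if time.time() - fetched_at < self.ttl:
                return body, "HIT"       # served from the cache
        body = fetch_from_origin(url)    # miss: forward the request
        self.store[url] = (body, time.time())
        return body, "MISS"

cache = WebCache()
origin = lambda url: "origin copy of " + url   # stand-in for the real server
_, status1 = cache.get("http://example.org/", origin)
_, status2 = cache.get("http://example.org/", origin)
print(status1, status2)  # MISS HIT
```

The second request never reaches the origin server, which is exactly the bandwidth, latency and server-load saving described above.&lt;br /&gt;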
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
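A minimal sketch of this lookup, assuming four illustrative levels (client, local, regional, national) and a hypothetical fetch_from_origin fallback, might look like this:&lt;br /&gt;

```python
# Sketch of hierarchical cache lookup: try each level in order,
# lowest first; on a hit (or an origin fetch), leave a copy at
# every level below the one that answered.

def hierarchical_get(url, levels, fetch_from_origin):
    """levels is ordered lowest (client) to highest (national);
    each level is a dict acting as that tier's cache."""
    for depth, cache in enumerate(levels):
        if url in cache:
            body = cache[url]
            break
    else:
        # No level had it: go all the way to the origin server.
        depth, body = len(levels), fetch_from_origin(url)
    # Copy the object into every level below the one that hit.
    for cache in levels[:depth]:
        cache[url] = body
    return body

client, local, regional, national = {}, {}, {}, {}
national["http://example.org/"] = "cached page"
body = hierarchical_get("http://example.org/",
                        [client, local, regional, national],
                        lambda u: "origin page")
print(body, "http://example.org/" in client)  # cached page True
```

The copy-down step is what lets popular sites propagate towards the demand: the next request from the same region is answered at a lower, closer level.&lt;br /&gt;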
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only a single level of caches that cooperate with one another to satisfy web requests. To do this, each cache retains metadata about the contents of all of the other caches it cooperates with and uses it to fulfill web requests received from clients. This scheme allows for better load balancing and introduces fault tolerance not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
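The metadata-based cooperation can be sketched as follows; the DistributedCache class and its directory structure are hypothetical simplifications of the cited designs.&lt;br /&gt;

```python
# Sketch of distributed (single-level) cooperative caching: each
# cache keeps metadata about which URLs its peers hold and
# redirects misses to the peer believed to have the object.

class DistributedCache:
    def __init__(self, name):
        self.name = name
        self.store = {}        # objects cached locally
        self.directory = {}    # url -> peer believed to hold it
        self.peers = {}        # peer name -> DistributedCache

    def add_peer(self, peer):
        self.peers[peer.name] = peer

    def put(self, url, body):
        self.store[url] = body
        # Advertise the new object to all cooperating peers.
        for peer in self.peers.values():
            peer.directory[url] = self.name

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        holder = self.directory.get(url)
        if holder is not None:
            return self.peers[holder].store[url], holder
        return None, None   # full miss: must go to the origin server

a = DistributedCache("cache-a")
b = DistributedCache("cache-b")
a.add_peer(b)
b.add_peer(a)
b.put("http://example.org/", "page body")
print(a.get("http://example.org/"))  # ('page body', 'cache-b')
```

Because every peer can answer for every other, load spreads across the group and the loss of one cache only costs its local copies, not the whole scheme.&lt;br /&gt;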
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level caches and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web cache hierarchies would reduce wasted bandwidth and improve the end-user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capacity. This increase in storage would allow more web data to be stored in more places, translating into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
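One way the passive variant might work is sketched below: a hypothetical neighbourhood proxy deterministically assigns each object to one user machine and points requesters at the holder. All names and the placement rule are illustrative assumptions.&lt;br /&gt;

```python
# Sketch of neighbourhood caching with passive user storage: the
# local proxy decides which user machine stores each object and
# points requesters at one another.
import hashlib

class NeighbourhoodProxy:
    def __init__(self, users):
        self.users = users            # user name -> dict of stored objects
        self.names = sorted(users)    # stable ordering for placement

    def placement(self, url):
        # Deterministically map each URL to one user's machine.
        digest = hashlib.sha1(url.encode()).hexdigest()
        return self.names[int(digest, 16) % len(self.names)]

    def store(self, url, body):
        self.users[self.placement(url)][url] = body

    def lookup(self, url):
        holder = self.placement(url)
        return holder, self.users[holder].get(url)

proxy = NeighbourhoodProxy({"alice": {}, "bob": {}, "carol": {}})
proxy.store("http://example.org/", "page body")
holder, body = proxy.lookup("http://example.org/")
print(body)  # page body
```

Because placement is deterministic, any requester in the neighbourhood can be sent directly to the right machine, with the user machines themselves doing no decision-making at all.&lt;br /&gt;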
&lt;br /&gt;
Another option for enabling lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems.  These new modems would have a relatively small amount of storage and computing power, removing the burden from users&#039; computers and allowing a special-purpose device to take over.  Since the majority of users do not reset their modems as often as they shut down their computers, this would allow greater reliability than the previously described solution.  As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and can vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances.  Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale.  Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. The new infrastructure could allow the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth, in the form of unneeded web requests sent from the caches to the originating web servers, will go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from users of one ISP that could be satisfied by another local ISP&#039;s cache must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in such requests being satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast, translating into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed web caching strategy implements distributed caching at each level of the hierarchy, it injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data can be duplicated. This duplication provides fault tolerance, and the caches could be implemented to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability, especially with full web application caching: in the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached means that any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region is disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies that improve how caching is done, can be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place would certainly involve significant infrastructure costs, even if it incorporated the ISP caches already in place. Although a considerable amount of infrastructure may be available in large urban centres, rural regions as well as the higher levels of the hierarchy (provincial, national, etc.) will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work, either to convert the old ISP caches or to set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher-level view of the system is taken.  For the purposes of this discussion, DNS is treated as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
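This simplified model can be made concrete with a toy resolver that walks the labels of a name from right to left, the way a real iterative resolver follows delegations. The tree contents and the address below are made up for illustration:&lt;br /&gt;

```python
# Toy version of the paper's simplified DNS model: a static tree queried
# with a domain name, returning an IP address. Labels are walked
# right-to-left (root -> TLD -> domain), mirroring real delegation.
# The tree and the address are fictitious.

TREE = {
    "ca": {
        "carleton": {
            "scs": "198.51.100.53",
        },
    },
}

def resolve(name, tree=TREE):
    node = tree
    for label in reversed(name.split(".")):   # e.g. scs.carleton.ca -> ca, carleton, scs
        if not isinstance(node, dict) or label not in node:
            return None                       # NXDOMAIN in the real protocol
        node = node[label]
    return node if isinstance(node, str) else None
```

A name that stops at an interior node (a zone with no address of its own in this toy tree) resolves to None, just as a missing name does.&lt;br /&gt;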
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  Users must accept that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any of a number of alternative services, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.  In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given its clean track record in providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep insight into user behaviour, being able to observe every single thing a user looks up.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility lying on the back of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These revolve around bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users.  For example, Bell Canada customers are served by just two DNS servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world refresh their records on static schedules and cache the results, so this period is needed for a change to reach everyone.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, noted under the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited set of servers to severely cripple Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security related, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the web caching scheme presented in the previous section: the hierarchical structure described there can function equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
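A minimal sketch of the kind of TTL-respecting DNS cache that could piggyback at each level of the web cache is shown below. The upstream resolver and the TTL values are stand-ins; entries expire so that upstream changes still propagate:&lt;br /&gt;

```python
# Minimal sketch (assumed, not a real implementation) of a caching DNS
# layer: answers are served locally until their TTL expires, then the
# next level up is asked again.

import time

class DnsCache:
    def __init__(self, resolve_upstream):
        self._resolve = resolve_upstream   # falls through to the next level
        self._entries = {}                 # name -> (ip, expires_at)

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(name)
        if entry and entry[1] > now:
            return entry[0]                # fresh cached answer
        ip, ttl = self._resolve(name)      # miss or expired: ask upstream
        self._entries[name] = (ip, now + ttl)
        return ip
```

Because answers are only ever stale for at most one TTL, a hierarchy of such caches trades a bounded propagation delay (the update propagation window discussed above) for locality.&lt;br /&gt;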
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts improvements on all the factors indicated above.  It also has the benefit of being incrementally deployable, a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, the static DNS tree is decentralized and distributed across the network with this implementation.  This removes the issues of bottlenecks and increases resiliency against attack as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  Directing traffic on the Internet, whether user-based or application-based, requires a naming service.  Without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands.&lt;br /&gt;
&lt;br /&gt;
Regardless of its implementation, the service is required to be both reliable and trusted.  The user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.  &lt;br /&gt;
&lt;br /&gt;
Having DNS in public hands would ensure this reliable service.  If the public is also in control of web caching, DNS can be incrementally deployed and rolled out as a piggyback on that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest guiding the maintenance of this service, the reliability and trust issue is satisfied.  Users must trust some entity with the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next generation service or upgrade, it would be done when deemed most beneficial for the public.  The incremental deployability of the CoDoNS service discussed above would allow such an upgrade to be rolled out gradually alongside the existing system. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a public physical network infrastructure, localized systems could continue to function as pockets of the Internet, even if a region were disconnected.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such arrangement would have to work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system would impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given their lower overhead and fewer cautions in decision making.  Users may miss out on the newest services available while the authority evaluates its upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users within its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more we will notice each individual advantage.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons, it makes the most sense to bring only certain portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to successfully identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9499</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9499"/>
		<updated>2011-04-12T03:30:20Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Alternatives */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria to identify other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable, for individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose adding the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs on an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be taken out of the sole control of private companies and converted into public goods. We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria for identifying other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While avoiding congestion is good for everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We also don&#039;t know whether this technology is deployed just to decrease bandwidth consumption so the company can avoid upgrading the infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  In Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the ISPs to behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means.  Additionally, the government is slow to act, and this could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it creates its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not be as fast as the incumbents&#039;, and people could still turn to the ISPs if they desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure.  In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM Transactions on Networking, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve, and as the level of support increases, the publicly offered speed could increase as well.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
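As an illustration of the super-node idea, the election step described above can be sketched as follows. The scoring rule, weights and node names are illustrative assumptions; real protocols such as DART use more sophisticated mechanisms.&lt;br /&gt;

```python
# Hypothetical sketch: electing "super nodes" in a wireless mesh.
# The scoring function and node data are illustrative assumptions,
# not part of any cited routing protocol.

def election_score(node):
    """Rank nodes by availability and link speed; static, always-on
    home machines naturally outrank mobile devices."""
    return node["uptime_fraction"] * node["link_mbps"]

def elect_super_nodes(nodes, k):
    """Pick the k best-scoring nodes to take on routing duties."""
    return sorted(nodes, key=election_score, reverse=True)[:k]

nodes = [
    {"id": "home-1",   "uptime_fraction": 0.99, "link_mbps": 50},
    {"id": "laptop-1", "uptime_fraction": 0.40, "link_mbps": 100},
    {"id": "phone-1",  "uptime_fraction": 0.25, "link_mbps": 30},
    {"id": "home-2",   "uptime_fraction": 0.95, "link_mbps": 25},
]

supers = elect_super_nodes(nodes, k=2)
print([n["id"] for n in supers])  # → ['home-1', 'laptop-1']
```

In practice the score would also account for battery power, link quality and mobility, and the election would be rerun as nodes join and leave the mesh.&lt;br /&gt;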
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services, such as email and instant messaging, that are more tolerant of lower speeds.  This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh has no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas comparable to the rural regions where this technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless equipment to create a small network.  As the mesh increases in size it can be self-organizing, with member nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher speed wired infrastructure of the urban centre.  The relationship between the density of connection points and the speeds the mesh can sustain has been studied&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;, again allowing incremental deployment, but in the dimension of speed.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in the provision of this infrastructure would necessitate an increase in taxes.  Since support would come from all levels of government, the cost would be distributed across all levels, making it almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to be changed.  An example would be email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to use the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
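The switching logic described above can be sketched as follows; the size threshold and the network labels are illustrative assumptions, not part of any existing standard.&lt;br /&gt;

```python
# Hypothetical sketch of network selection between the public mesh and a
# private ISP link. The cutoff value is an assumption for illustration.

MESH_MAX_BYTES = 256 * 1024  # assumed cutoff: small payloads stay on the mesh

def choose_network(payload_bytes, isp_available):
    """Route small transfers over the public mesh; send large ones over the
    faster ISP link when one is available, else fall back to the mesh."""
    if payload_bytes <= MESH_MAX_BYTES or not isp_available:
        return "mesh"
    return "isp"

assert choose_network(4_000, isp_available=True) == "mesh"        # plain email
assert choose_network(20_000_000, isp_available=True) == "isp"    # big attachment
assert choose_network(20_000_000, isp_available=False) == "mesh"  # fall back
```

A real mail client would apply this per attachment rather than per message, fetching headers and small bodies over the mesh and only the oversized parts over the ISP link.&lt;br /&gt;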
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many elements of websites do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency to the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is reduced significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage by the providing web server&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that reach the server by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that web caches bring to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
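The basic hit/miss flow described at the start of this section (serve a hit from the cache, fetch and store on a miss) can be sketched as follows; the in-memory dictionary and the stand-in fetch function are illustrative assumptions.&lt;br /&gt;

```python
# Minimal sketch of a cache lookup: return a stored copy on a hit,
# otherwise fetch from the origin server and keep a copy for next time.
# fetch_from_origin is a hypothetical stand-in for a real HTTP fetch.

cache = {}

def fetch_from_origin(url):
    return f"<content of {url}>"

def handle_request(url):
    if url in cache:
        return cache[url], "HIT"    # served locally, no outbound traffic
    body = fetch_from_origin(url)
    cache[url] = body               # store a copy for later requesters
    return body, "MISS"

body, status = handle_request("http://example.org/logo.png")
print(status)  # MISS - the first request goes to the origin server
body, status = handle_request("http://example.org/logo.png")
print(status)  # HIT - the second request is answered from the cache
```

A production cache would additionally honour the "certain conditions" mentioned above, such as expiry times and validation headers, before serving a stored copy.&lt;br /&gt;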
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional and finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web objects to propagate towards the demand.&lt;br /&gt;
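The hierarchical lookup just described can be sketched as follows, assuming four illustrative levels; on a hit (or an origin fetch), a copy of the object is left at every level below the one that answered.&lt;br /&gt;

```python
# Sketch of hierarchical cache lookup. Level names and the origin fetch
# are illustrative assumptions.

LEVELS = ["client", "local", "regional", "national"]
caches = {level: {} for level in LEVELS}

def hierarchical_get(url):
    """Try each level from lowest to highest; copy the answer downward."""
    for i, level in enumerate(LEVELS):
        if url in caches[level]:
            body, hit_level = caches[level][url], level
            break
    else:
        # No level had it: fetch from the origin server.
        body, hit_level = f"<content of {url}>", "origin"
        i = len(LEVELS)
    for lower in LEVELS[:i]:        # leave a copy at each lower level
        caches[lower][url] = body
    return body, hit_level

_, where = hierarchical_get("http://example.org/")
print(where)  # origin - the first request climbs the whole hierarchy
caches["client"].clear()            # simulate eviction on the client
_, where = hierarchical_get("http://example.org/")
print(where)  # local - the copy left one level up answers immediately
```

This is how popular objects "propagate towards the demand": every request path seeds the levels beneath it.&lt;br /&gt;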
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains metadata about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
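A minimal sketch of the distributed scheme described above, in which each cache keeps metadata about the contents of its cooperating peers and forwards misses to the peer that advertises the object (the class and method names are illustrative assumptions):&lt;br /&gt;

```python
# Sketch of one-level distributed caching: peers exchange content
# summaries, and a miss is redirected to a sibling cache before the
# request falls through to the origin server.

class DistCache:
    def __init__(self, name):
        self.name = name
        self.store = {}        # objects this cache holds
        self.peer_index = {}   # url -> peer cache advertising that url

    def advertise(self, peers):
        """Record which URLs each cooperating cache currently holds."""
        for peer in peers:
            for url in peer.store:
                self.peer_index[url] = peer

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        peer = self.peer_index.get(url)
        if peer is not None:
            return peer.store[url], peer.name  # satisfied by a sibling
        return None, "origin"                  # fall through to the server

a, b = DistCache("cache-a"), DistCache("cache-b")
b.store["http://example.org/"] = "<page>"
a.advertise([b])
print(a.get("http://example.org/")[1])  # cache-b
```

In a real deployment the peer index would be a compact digest refreshed periodically, and a stale entry would simply degrade to an origin fetch rather than an error.&lt;br /&gt;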
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists, but a number of caches on each level cooperate with each other in a distributed fashion. This type of system can combine the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol&amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to the end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, and they do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end user experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in the exact same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state level web caches and finally a national level. These would of course all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized cache hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capacity. This increase in storage would allow more web data to be stored in more places, translating into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding what other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently carried at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
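The passive variant described above, in which the local proxy decides placement and simply points requesters at the neighbour holding a copy, can be sketched as follows (all names are illustrative assumptions):&lt;br /&gt;

```python
# Sketch of a neighbourhood cache directory run by the local proxy.
# User machines are passive storage; the proxy tracks placement and
# redirects requesters. Identifiers here are illustrative.

class NeighbourhoodProxy:
    def __init__(self):
        self.directory = {}   # url -> user machine holding a cached copy

    def place(self, url, user):
        """Proxy decides which user machine stores a copy of the object."""
        self.directory[url] = user

    def locate(self, url):
        """Point the requester at a neighbour, or defer to the wider cache
        hierarchy when no neighbour holds the object."""
        return self.directory.get(url, "upstream-cache")

proxy = NeighbourhoodProxy()
proxy.place("http://example.org/video", "user-17")
print(proxy.locate("http://example.org/video"))  # user-17
print(proxy.locate("http://example.org/other"))  # upstream-cache
```

The active variant would move the `locate` decision onto the user machines themselves, with the proxy retained only to mediate privacy between neighbours.&lt;br /&gt;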
&lt;br /&gt;
Another option to allow for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web application code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would develop their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. This means that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without needing the enormous financial or physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that it would allow individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole to still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or one whose section of the Internet is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in its reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from a user of one ISP that could be satisfied by a cache implemented by another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in such web requests being satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches rather than having to be sent all the way to the original web server.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would provide fault tolerance, and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached would mean that any user has full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how web caching is done, can be implemented if it is in the best interests of the public. Currently we must rely on these upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both its software and its infrastructure, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would entail significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g., a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system.  DNS (Domain Name System) aids this process by allowing resources to be referred to by name, rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher-level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
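The static, distributed tree described above can be illustrated with a toy resolver. The zone contents and the address below are purely illustrative, not real DNS data: &lt;br /&gt;

```python
# Toy model of DNS as a static, distributed tree: each zone knows its
# children, and a query walks the labels from the root downward.
# All names and addresses here are made up for illustration.
ROOT = {
    "ca": {
        "carleton": {
            "scs": {"homeostasis": "134.117.0.1"},  # illustrative address
        },
    },
}

def resolve(domain, tree=ROOT):
    """Walk the tree from the rightmost label (the TLD) to the leftmost."""
    node = tree
    for label in reversed(domain.split(".")):
        if not isinstance(node, dict) or label not in node:
            return None  # NXDOMAIN: no such branch in the tree
        node = node[label]
    return node if isinstance(node, str) else None

print(resolve("homeostasis.scs.carleton.ca"))  # -> 134.117.0.1
print(resolve("missing.example"))              # -> None
```

Real resolvers distribute this walk across servers (root, TLD, authoritative), but the tree-shaped lookup is the same.&lt;br /&gt;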
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  Users understand that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise when considering user privacy, though.  In the case of Google, there is reason to consider how Google will end up treating and using the information it gains access to, even given a clean track record when it comes to providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would now have deep access to user behaviour, being able to determine every single thing a user seeks out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These problems centre on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users.  For example, Bell Canada customers are served by two DNS servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be used for DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world cache records and only refresh them on a fixed schedule, so this period is required for a change to reach everyone.&lt;br /&gt;
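The propagation delay can be illustrated with a minimal time-to-live (TTL) cache sketch (the names, addresses and TTL values are illustrative): a cached record keeps being served until its TTL runs out, so a nameserver change is invisible to users behind that cache until then. &lt;br /&gt;

```python
class TTLCache:
    """Minimal DNS-style cache: a record is served from the cache until
    its time-to-live (TTL) expires, which is why nameserver changes can
    take hours to propagate."""
    def __init__(self):
        self.store = {}  # name -> (address, expiry_time)

    def put(self, name, address, ttl, now):
        self.store[name] = (address, now + ttl)

    def get(self, name, now):
        entry = self.store.get(name)
        if entry and now < entry[1]:
            return entry[0]       # still fresh: the old answer keeps winning
        return None               # expired: the authority must be re-queried

cache = TTLCache()
cache.put("example.org", "192.0.2.1", ttl=3600, now=0)
assert cache.get("example.org", now=1800) == "192.0.2.1"  # old record served
assert cache.get("example.org", now=4000) is None         # must re-fetch
```

Time is passed in explicitly here to keep the sketch deterministic; a real resolver would use the clock and the TTL carried in each DNS record.&lt;br /&gt;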
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS.  Much as web caching does for regular content browsing, DNS caching improves performance by reducing latency.  DNS caches could be contained within the web caching schemes presented in the previous section: the hierarchical structure described can function equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion; users dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
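The &#039;democratic&#039; behaviour described above can be sketched as a least-recently-used cache, one plausible eviction policy (the section does not prescribe one): visiting a site keeps its record resident, and unvisited records eventually fall out. The domain names and the fake resolver below are illustrative. &lt;br /&gt;

```python
from collections import OrderedDict

class PopularityCache:
    """Sketch of the 'democratic' cache: simply visiting a site keeps its
    DNS record resident; the least-recently-used entry is evicted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # name -> address, in LRU order

    def visit(self, name, resolve):
        if name in self.entries:
            self.entries.move_to_end(name)        # a visit renews the entry
        else:
            self.entries[name] = resolve(name)    # miss: do a full lookup
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict the least popular
        return self.entries[name]

cache = PopularityCache(capacity=2)
fake_resolver = lambda name: f"10.0.0.{hash(name) % 250}"  # stand-in lookup
cache.visit("a.example", fake_resolver)
cache.visit("b.example", fake_resolver)
cache.visit("a.example", fake_resolver)   # renews a.example
cache.visit("c.example", fake_resolver)   # evicts b.example, not a.example
assert "a.example" in cache.entries and "b.example" not in cache.entries
```

The cache contents end up reflecting what users actually visit, with no operator deciding which sites deserve fast lookups.&lt;br /&gt;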
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is called the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt;. Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, the static DNS tree is decentralized and distributed across the network with this implementation.  This removes the issues of bottlenecks and increases resiliency against attack as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  Directing traffic on the Internet, whether user-based or application-based, requires the use of a naming service.  Without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands.&lt;br /&gt;
&lt;br /&gt;
Regardless of its implementation, it is a service that must be both reliable and trusted.  The user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.  &lt;br /&gt;
&lt;br /&gt;
Having DNS in public hands would ensure this reliable service.  If the public also controls web caching, DNS can be incrementally deployed and rolled out as a piggyback on that scenario.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issues are satisfied.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of the CoDoNS service described above makes it a natural candidate for such an upgrade. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency and wasted bandwidth and enhanced robustness and reliability. The cache is also democratized: users need only use sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a public physical network infrastructure, localized systems could function as pockets of the Internet, providing a basic level of service even when a region is disconnected.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as does any good that is brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something that everyone should have access to and that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot be guaranteed accessible to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, which are sometimes at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect of their benefits. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet meeting the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that it would be undesirable or unlikely for the public to hold. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9497</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9497"/>
		<updated>2011-04-12T03:25:30Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Problems */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this long list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (e.g., business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While avoiding congestion benefits everyone, because the technology is implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We also do not know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During an uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industries through lobbyists or other means.  Additionally, government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing that behaviour is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people might still desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, responsibility would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
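The supernode election mentioned above can be sketched in a few lines; the fitness score (availability times bandwidth) and the fraction of nodes promoted are illustrative assumptions for this paper, not part of any cited routing protocol.&lt;br /&gt;

```python
# Minimal sketch of supernode election in a wireless mesh. Nodes with the
# highest fitness (here, availability * bandwidth -- a hypothetical metric)
# are promoted to supernodes responsible for routing information.

def elect_supernodes(nodes, fraction=0.25):
    """Return the ids of the top `fraction` of nodes by fitness score."""
    ranked = sorted(nodes, key=lambda n: n["availability"] * n["bandwidth"],
                    reverse=True)
    count = max(1, int(len(ranked) * fraction))
    return {n["id"] for n in ranked[:count]}

# A mix of highly available static nodes and mobile, less available ones.
nodes = [
    {"id": "home-pc", "availability": 0.99,  "bandwidth": 50},
    {"id": "laptop",  "availability": 0.40,  "bandwidth": 30},
    {"id": "phone",   "availability": 0.20,  "bandwidth": 10},
    {"id": "router",  "availability": 0.999, "bandwidth": 100},
]
supernodes = elect_supernodes(nodes)
```

As expected, the static, highly available node wins the election while the mobile devices remain ordinary members.&lt;br /&gt;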
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services, such as email and instant messaging, that are more tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speed and the services dependent on it, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh has no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a major benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the Internet is surfing or visiting low-bandwidth websites.  It could also help make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, with its low population density, has areas comparable to the rural regions where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless hardware to create a small network.  As the mesh grows, it can be self-organizing, with nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and it is related to the speeds that the mesh can sustain, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in providing infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the cost would be distributed across all levels, making the increase almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
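The network-selection behaviour described above might look like the following sketch; the 1 MiB threshold and the network names are hypothetical placeholders, not part of the proposal itself.&lt;br /&gt;

```python
# Sketch of choosing between the public mesh and a faster private link per
# transfer. Small payloads (e.g. email bodies) stay on the mesh; large ones
# (e.g. big attachments) use the private ISP link when it is available.

MESH_THRESHOLD_BYTES = 1 * 1024 * 1024  # assumed 1 MiB cut-off

def pick_network(payload_bytes, fast_link_available=True):
    """Return which network a transfer of `payload_bytes` should use."""
    if payload_bytes > MESH_THRESHOLD_BYTES and fast_link_available:
        return "private-isp"
    return "public-mesh"  # default: low-bandwidth public infrastructure
```

A mail client using such a rule would fetch message headers and small bodies over the mesh and only touch the private link for large attachments.&lt;br /&gt;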
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be served later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned and the request is not passed on to the originating web server. Many aspects of websites do not change very often (i.e. logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can exist either on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data has to travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache brings to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
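The basic request flow described at the start of this section (a hit returns the stored copy, a miss fetches from the origin and stores the result) can be sketched as a toy model; the fetch function below stands in for a real HTTP request.&lt;br /&gt;

```python
# Toy model of the basic web-cache request flow.

class WebCache:
    def __init__(self, fetch_origin):
        self.store = {}               # url -> cached object
        self.fetch_origin = fetch_origin
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:         # cache hit: no origin traffic
            self.hits += 1
            return self.store[url]
        self.misses += 1              # cache miss: go to the origin server
        data = self.fetch_origin(url)
        self.store[url] = data        # keep a copy for later requests
        return data

# Stand-in for fetching from the originating web server.
cache = WebCache(lambda url: "<html>content of " + url + "</html>")
first = cache.get("http://example.com/logo.png")   # miss
second = cache.get("http://example.com/logo.png")  # hit, served locally
```

The second request never leaves the cache, which is exactly the bandwidth and latency saving the advantages above describe.&lt;br /&gt;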
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which use distributed or hierarchical elements. These approaches are not examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
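A minimal model of the hierarchical lookup just described: a miss is forwarded to the parent level, and the response leaves a copy at every level on the way back down. The level names and the origin function are illustrative only.&lt;br /&gt;

```python
# Sketch of hierarchical web caching: client -> local -> regional ->
# national -> origin, with copies deposited at each level on the way down.

class Level:
    def __init__(self, name, parent=None, origin=None):
        self.name, self.parent, self.origin = name, parent, origin
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name     # satisfied at this level
        if self.parent is not None:
            data, source = self.parent.get(url)   # escalate the miss
        else:
            data, source = self.origin(url), "origin"
        self.store[url] = data                    # copy left at this level
        return data, source

national = Level("national", origin=lambda u: "page:" + u)
regional = Level("regional", parent=national)
local = Level("local", parent=regional)

_, first_source = local.get("http://example.com/")    # goes to the origin
_, second_source = regional.get("http://example.com/")  # now a regional hit
```

After the first request, popular content has propagated toward the demand, so later requests anywhere under the regional cache are served without leaving the region.&lt;br /&gt;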
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains metadata about the contents of all of the other caches it cooperates with and uses it to fulfill web requests received from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
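A toy model of the one-level distributed scheme: each cache consults a shared directory (standing in for the per-cache metadata about peer contents) to find which peer holds a URL. A real system would replicate this metadata at every cache rather than share a single dictionary.&lt;br /&gt;

```python
# Sketch of one-level distributed web caching: peers consult a directory of
# who-holds-what before falling back to the originating server.

class PeerCache:
    def __init__(self, name, directory, origin):
        self.name, self.directory, self.origin = name, directory, origin
        self.store = {}

    def put(self, url, data):
        self.store[url] = data
        self.directory[url] = self        # advertise the copy to peers

    def get(self, url):
        if url in self.store:             # local hit
            return self.store[url], self.name
        holder = self.directory.get(url)  # peer hit via shared metadata
        if holder is not None:
            return holder.store[url], holder.name
        data = self.origin(url)           # miss everywhere: go to origin
        self.put(url, data)
        return data, "origin"

directory = {}
a = PeerCache("cache-a", directory, lambda u: "page:" + u)
b = PeerCache("cache-b", directory, lambda u: "page:" + u)

_, src1 = a.get("http://example.com/")  # miss -> origin; cache-a stores it
_, src2 = b.get("http://example.com/")  # satisfied from peer cache-a
```

The second cache never contacts the origin, illustrating the load balancing and fault tolerance that cooperation provides.&lt;br /&gt;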
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and is therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. Users could then share their caches with each other, allowing the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems.  These new modems would have a relatively small amount of storage and computing power.  This would remove the burden from the users&#039; computers and let a special-purpose device take over.  Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution.  As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on circumstances.  Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale.  Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise.  This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold.  This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while using the locally cached data as well.  In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together.  There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications.  A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean a dramatic increase in hardware and support costs, as it does today.  Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the Internet which, for one reason or another, become disconnected from the Internet as a whole could still communicate through the web applications and data stored in their web caches.  A region undergoing a major natural catastrophe such as an earthquake, or even a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would have access to all of the web data currently stored in all of the reachable caches.  This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by the caches of another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed web caching strategy implements distributed caching at each level of the caching hierarchy, it injects a level of reliability that is not present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication would allow for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how caching is done, can be implemented if it is in the best interest of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both software and infrastructure, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it incorporated the ISP caches already deployed, would certainly incur significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work, whether converting the old ISP caches or setting up new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (the Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a simpler, higher-level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Currently the service falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
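The static-tree view taken above can be sketched in a few lines of Python. The zone data and the leaf address below are invented for illustration; a real resolver follows referrals across many separate servers rather than walking a single in-memory tree.

```python
# A toy model of DNS as a static, distributed tree: each node maps a
# label to either a subtree (dict) or a leaf IP address (str).
TREE = {
    'ca': {
        'carleton': {
            'scs': {'homeostasis': '134.117.0.1'},  # invented address
        },
    },
}

def resolve(name):
    # Walk the tree from the root, following labels right to left,
    # e.g. 'homeostasis.scs.carleton.ca' visits ca, carleton, scs, ...
    node = TREE
    for label in reversed(name.split('.')):
        if not isinstance(node, dict):
            return None
        node = node.get(label)
        if node is None:
            return None
    return node if isinstance(node, str) else None
```

A query such as `resolve('homeostasis.scs.carleton.ca')` descends one level per label and returns the leaf address, while an unknown name or an interior node yields no answer.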
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  Users should understand that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternative services, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues arise here as well.  In the case of Google, there is reason to consider how Google would treat and use the information it gains access to, even given its clean track record in providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to determine every single thing that is sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues must also be considered for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These issues involve bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users.  For example, Bell Canada customers are served by two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world update their records on a static schedule and, because cached records are held until they expire, this period is required for changes to get across.&lt;br /&gt;
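The caching behaviour behind this delay can be sketched with a small time-to-live (TTL) cache: a record served before its expiry never reflects an upstream change. The class and method names below are invented for illustration, not part of any real resolver.

```python
import time

class TtlCache:
    # A minimal sketch of DNS record caching: each stored record keeps
    # an absolute expiry time; reads past expiry miss, which is why a
    # nameserver change can take up to a full TTL to propagate.
    def __init__(self):
        self.entries = {}

    def put(self, name, address, ttl_seconds):
        self.entries[name] = (address, time.time() + ttl_seconds)

    def get(self, name):
        record = self.entries.get(name)
        if record is None:
            return None
        address, expiry = record
        # the record has expired when expiry is at or before now
        if min(expiry, time.time()) == expiry:
            del self.entries[name]
            return None
        return address
```

With, say, a 48-hour TTL, every resolver that cached the old record before the change keeps answering with it until its own copy expires.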
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely disrupt Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the web caching schemes presented in the previous section, as the hierarchical structure described there functions equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users dictate which sites are loaded quickly simply by visiting them.&lt;br /&gt;
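The piggybacked hierarchy might behave as sketched below: each cache level answers a lookup if it can, otherwise defers upward and remembers the answer on the way back down, so frequently visited sites migrate toward the users who visit them. The level names and addresses are invented for illustration.

```python
class CacheLevel:
    # One level of the hierarchy (neighbourhood, city, province, ...).
    # On a miss it defers to its parent, then stores the answer so
    # later local lookups are served without leaving the level.
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.records = {}

    def lookup(self, domain):
        if domain in self.records:
            return self.records[domain], self.name
        if self.parent is None:
            return None, self.name
        address, level = self.parent.lookup(domain)
        if address is not None:
            self.records[domain] = address  # populate on the way down
        return address, level

province = CacheLevel('province')
city = CacheLevel('city', parent=province)
neighbourhood = CacheLevel('neighbourhood', parent=city)
province.records['example.org'] = '192.0.2.10'  # invented address
```

The first `neighbourhood.lookup('example.org')` is answered by the province; a repeat of the same lookup is answered locally, which is the democratizing effect described above.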
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt;. Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It also has the benefit of being incrementally deployable, which is very important when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, the static DNS tree is decentralized and distributed across the network with this implementation.  This removes the issues of bottlenecks and increases resiliency against attack as the single points of failure have been removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The essential nature of DNS makes it a strong candidate to be a public good.  The way traffic is directed on the Internet, whether user-based or application-based, requires a naming service.  Without a functional service, the bulk of Internet traffic would falter, not knowing where to go.  ISPs and alternative services have provided a strong framework thus far; however, the interference issues imposed by the ISPs and the privacy concerns raised by some of the alternative services indicate that the ideal scenario lies in public hands.&lt;br /&gt;
&lt;br /&gt;
Regardless of its implementation, DNS is a service that must be both reliable and trusted.  A user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service.&lt;br /&gt;
&lt;br /&gt;
Having the DNS in public hands will ensure a reliable service.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is satisfied.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most ideal for the public.  The incremental deployment of the CoDoNS service described earlier illustrates how such an upgrade could be rolled out gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as does any good that is brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead or caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning that aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users for a given Internet public good is essential. If an aspect of the Internet cannot be given guaranteed access to all of the users in its reach, then it should not be considered a public good, by definition.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet would provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely to be held by the public. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to identify future public goods candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9495</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9495"/>
		<updated>2011-04-12T03:23:26Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the Internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion are illustrated.  Finally, criteria to identify other candidates for public goods are established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed essential, beneficial and non-excludable, to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet has become a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be easy to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs on an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods. We propose how these aspects could be removed from being solely in the hands of private companies and converted to a public good.  These are the physical infrastructure of the Internet, web caching and DNS.  We chose these three pieces based on them being absolutely essential to the current operation of the Internet. After doing this, and examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to be answered to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified and we believe these can provide a base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we do not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using various criteria decided by the ISPs.  While congestion control can benefit everyone, when the technology is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We don&#039;t know whether this technology is deployed merely to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industries through lobbyists or other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people might desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh would exist alongside the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides an alternative transport for Internet traffic.  At the simplest level, this mesh would be composed of a number of fairly static, highly available nodes (users&#039; home computers) together with a large number of highly mobile nodes of variable availability (users&#039; laptops and Internet-aware personal devices).  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing so&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighbourhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where mesh density is too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, though performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
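The super node election mentioned above can be sketched as follows. This is an illustrative toy, not part of DART or Scalable Landmark Flooding: the node fields, the fitness score (availability times link speed) and the election fraction are assumptions made for the example.&lt;br /&gt;

```python
# Toy sketch of electing mesh "super nodes": rank nodes by a simple
# fitness score and promote the top fraction. The score and fraction
# are illustrative assumptions, not part of any cited routing protocol.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    availability: float  # fraction of time the node is reachable, 0..1
    mbps: float          # measured link speed

def elect_super_nodes(nodes, fraction=0.2):
    """Return the top `fraction` of nodes ranked by availability * speed."""
    ranked = sorted(nodes, key=lambda n: n.availability * n.mbps, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]

nodes = [
    Node("desktop-a", 0.99, 50.0),
    Node("laptop-b", 0.40, 30.0),
    Node("phone-c", 0.20, 10.0),
    Node("desktop-d", 0.95, 25.0),
    Node("router-e", 0.999, 100.0),
]
print([n.name for n in elect_super_nodes(nodes, fraction=0.4)])
```

In a real mesh the election would of course be distributed and would re-run as nodes join, leave and move.&lt;br /&gt;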
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would take over basic services that tolerate lower speeds, such as email and instant messaging, offloading them from the conventional infrastructure.  This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh has no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a major benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and could also make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas comparable to the rural regions where the technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighbourhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh grows, it can be self-organizing, with nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure in the urban centre.  The relationship between the density of connection points and the speeds a mesh can sustain has been studied&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;, again allowing incremental deployment, but in the dimension of speed.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in the provision of this infrastructure would necessitate an increase in taxes.  Since the support would be spread across all levels of government, however, the increase at any one level would be almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to use the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
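A minimal sketch of such network-aware software, assuming a single size threshold separates &#039;low bandwidth&#039; traffic (sent over the public mesh) from traffic that should use the faster private link; the threshold value and link labels are invented for illustration:&lt;br /&gt;

```python
# Illustrative two-network selection: small transfers go over the public
# mesh, large ones over the faster private ISP link. The threshold is an
# assumed cutoff, not a figure from the paper.
MESH_THRESHOLD_BYTES = 256 * 1024

def choose_network(payload_bytes, mesh_up=True, isp_up=True):
    """Pick a link for a transfer of `payload_bytes` bytes."""
    if payload_bytes <= MESH_THRESHOLD_BYTES and mesh_up:
        return "mesh"       # e-mail text, instant messages, small pages
    if isp_up:
        return "isp"        # large attachments, video streams
    return "mesh" if mesh_up else None  # degrade rather than fail

print(choose_network(4 * 1024))                        # mesh
print(choose_network(10 * 1024 * 1024))                # isp
print(choose_network(10 * 1024 * 1024, isp_up=False))  # mesh (fallback)
```

The fallback branch captures the robustness argument made earlier: when the private link is unavailable, traffic degrades to the mesh instead of failing outright.&lt;br /&gt;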
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could take the form of consumed CPU cycles and additional power used to keep nodes available.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be reused later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server that can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can be reduced substantially. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage at the providing web server&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections passed through to it by serving data the cache has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that a web cache adds to the Internet, allowing users to access documents even when the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches are not examined in depth here, as we consider them implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national level cache. Web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web objects to propagate towards the demand.&lt;br /&gt;
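The pass-up-then-copy-down behaviour described above can be sketched in a few lines; the class and method names are illustrative, not taken from any deployed cache:&lt;br /&gt;

```python
# Toy hierarchical cache: a miss is forwarded up the chain
# (local -> regional -> national -> origin), and the response leaves
# a copy at every level on the way back down.
class CacheLevel:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.store = name, parent, {}

    def get(self, url, origin_fetch):
        if url in self.store:
            return self.store[url], self.name          # hit at this level
        if self.parent is not None:
            data, hit_level = self.parent.get(url, origin_fetch)
        else:
            data, hit_level = origin_fetch(url), "origin"
        self.store[url] = data                         # leave a copy here
        return data, hit_level

national = CacheLevel("national")
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)

fetches = []
def origin_fetch(url):
    fetches.append(url)
    return f"<content of {url}>"

local.get("http://example.org/logo.png", origin_fetch)   # miss everywhere
data, level = local.get("http://example.org/logo.png", origin_fetch)
print(level, len(fetches))  # second request hits locally; one origin fetch
```

Real hierarchies add expiry and consistency checks at each level; the sketch keeps only the pass-up/copy-down structure.&lt;br /&gt;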
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
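The cooperation through shared meta-data described above can be sketched as follows. This is a toy illustration, assuming a single shared directory mapping each URL to the cache that holds it; real systems exchange digests or use hashing rather than a full directory:&lt;br /&gt;

```python
# Toy distributed (one-level) cache: peers consult shared meta-data
# (a url -> owning-cache directory) and forward requests to whichever
# peer already holds the object.
class DistCache:
    def __init__(self, name, directory):
        self.name, self.store = name, {}
        self.directory = directory      # shared url -> cache map (meta-data)

    def put(self, url, data):
        self.store[url] = data
        self.directory[url] = self      # advertise the object to peers

    def get(self, url, origin_fetch):
        owner = self.directory.get(url)
        if owner is self:
            return self.store[url], "local"
        if owner is not None:
            return owner.store[url], owner.name   # served by a peer
        data = origin_fetch(url)                  # true miss: go to origin
        self.put(url, data)
        return data, "origin"

directory = {}
a = DistCache("cache-a", directory)
b = DistCache("cache-b", directory)

a.get("http://example.org/", lambda u: "page")   # miss; cache-a now owns it
data, served_by = b.get("http://example.org/", lambda u: "page")
print(served_by)  # cache-b's request is satisfied by its peer, cache-a
```

Losing one peer here only loses its objects and directory entries, which is the fault-tolerance argument made above for distributed over strictly hierarchical caching.&lt;br /&gt;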
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is obviously important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their users&#039; requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of the network traffic currently carried at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for enabling lower level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. The new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would design their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet which, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in all of the reachable caches. This added robustness could reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from a user of one ISP that could be satisfied by a cache at another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level is likely to be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication allows for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. These caches would also drastically improve reliability when combined with full web application caching: if a single region were disconnected from the Internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that innovations in web caching, along with new technologies that improve how web caching is done, can be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the already-in-place ISP caches, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems are set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low level neighbourhood caches were implemented, individual users would have to either provide CPU cycles and storage space (which is itself a cost) or purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system. DNS (the Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a hostname, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the purposes of this paper, many technical details are avoided and a more simplistic, higher-level view of the system is taken. For this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Provision of the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database or tree of names to IP addresses for its users to access.&lt;br /&gt;
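The simplified model above, a static, distributed tree queried with a domain name, can be sketched in a few lines of Python. The names and the address used here are invented purely for illustration.&lt;br /&gt;

```python
# Toy model of DNS as a static, distributed tree: each level of the
# hierarchy (root, then TLD, then domain) delegates the query downward
# until an IP address is reached.  All names and addresses are made up.
TREE = {
    "ca": {
        "carleton": {
            "scs": "192.0.2.10",  # hypothetical address (TEST-NET range)
        },
    },
}

def resolve(name):
    """Walk the tree from the root using the name's labels in reverse order."""
    node = TREE
    for label in reversed(name.split(".")):
        node = node[label]  # delegate one level down
    return node             # a leaf holds the IP address

print(resolve("scs.carleton.ca"))  # 192.0.2.10
```

A real resolver walks the same hierarchy across separate servers (root, top-level domain, authoritative) rather than a single in-memory structure.&lt;br /&gt;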
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service. It is understood by users that all Internet requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues arise here as well. In the case of Google, there is reason to consider how Google would treat and use the information it gains access to, even given a clean track record when it comes to providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt; By handling DNS queries, Google would gain deep access to user behaviour, being able to determine every single thing that is sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project. As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine this large responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot; The configuration may also demand a fair bit from the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations. These issues centre around bottlenecks, update propagation, attack resiliency and general performance. Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must be accessed by many users. For example, Bell Canada customers are served by only two DNS servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain&#039;s nameserver can take up to 48 hours to propagate across the Internet. DNS servers around the world refresh their records on a fixed schedule and, because answers are cached until they expire, this period is required to get the changes across.&lt;br /&gt;
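A rough sketch of why those delays occur, assuming a hypothetical fetch callback standing in for a query to the authoritative server; the default time-to-live below mirrors the 48-hour figure mentioned above:&lt;br /&gt;

```python
import time

class TTLCache:
    """Resolver-side cache: answers are reused until their TTL expires."""

    def __init__(self):
        self.entries = {}  # name -> (address, expiry timestamp)

    def lookup(self, name, fetch, ttl=48 * 3600):
        entry = self.entries.get(name)
        if entry is not None and entry[1] > time.time():
            return entry[0]    # cached copy still fresh: authority not consulted
        address = fetch(name)  # stand-in for an authoritative-server query
        self.entries[name] = (address, time.time() + ttl)
        return address

cache = TTLCache()
cache.lookup("example.test", lambda name: "192.0.2.1")
# The authoritative record changes, but cached resolvers keep serving
# the old answer until the 48-hour TTL runs out:
print(cache.lookup("example.test", lambda name: "192.0.2.99"))  # 192.0.2.1
```

Until the cached entry expires, every client of such a resolver keeps receiving the old address, no matter how quickly the authoritative record is updated.&lt;br /&gt;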
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks. Malicious users can target the limited servers to severely cramp Internet traffic at any time. Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS. Much as web caching does for regular content browsing, DNS caching improves performance by reducing latency. DNS caches could be contained within the web caching schemes presented in the previous section: the hierarchical structure described there can function equally well for DNS purposes. Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion; users are able to dictate which sites are resolved quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching. One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above. It also has the benefit of being incrementally deployable, which is a very important point when it comes to upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications will be avoided in favour of looking at the role of DNS as a whole for its use as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, this implementation decentralizes the static DNS tree and distributes it across the network. This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been eliminated.&lt;br /&gt;
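As a toy illustration of this idea (not the actual CoDoNS protocol), name records can be spread across peers by hashing each name to a responsible node, so that no single server holds the whole tree:&lt;br /&gt;

```python
import hashlib

# Four hypothetical peers stand in for a network-wide overlay.
PEERS = ["peer-a", "peer-b", "peer-c", "peer-d"]

def home_peer(name):
    """Hash a domain name to the peer responsible for storing its record."""
    digest = hashlib.sha1(name.encode()).hexdigest()
    return PEERS[int(digest, 16) % len(PEERS)]

# The mapping is deterministic: every client agrees on where a record lives.
print(home_peer("scs.carleton.ca") in PEERS)  # True
```

Every client computes the same mapping, so lookups go directly to the responsible peer, and the loss of one peer affects only its share of the records.&lt;br /&gt;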
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted. Its user base depends on some form of trusted source, whether that is a government initiative, a corporately controlled process, or a user-contributed service. Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the Internet as we know it today.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest driving the maintenance of this service, the reliability and trust issue is addressed. Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind. Misinformation and misdirection are averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it will be done when deemed most beneficial for the public. The incremental deployment supported by the CoDoNS service described earlier would make such a publicly coordinated rollout feasible. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization must be required to work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring (or mandating) and then maintaining the current system will impose a financial burden on the public, as does any good that is brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in their decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the Internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If guaranteed access to an aspect of the Internet cannot be given to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system, and it is a central concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effects that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different than the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given earlier was full web application caching. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely candidates for public ownership. Many modern businesses rely on the Internet for a significant portion of their revenue and are in fact responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. Examining these entities exposed a list of common criteria that could be used to identify future public goods on the Internet. Using this set of criteria, one would be able to successfully identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9494</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9494"/>
		<updated>2011-04-12T03:21:09Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society. The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to the access and use of the Internet. Using three examples of public good candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated. Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and have identified public goods, which can be defined as “resources that are held in common in the sense than no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process. The Internet is a system of heterogeneous computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We propose how these aspects could be removed from the sole control of private companies and converted into public goods. We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the Internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the Internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet Service Providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet. These companies make decisions based on their own profit margins, with little regard for the public good. One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;. Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs. While this can benefit everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times. We don&#039;t know if the technology is deployed merely to decrease bandwidth consumption so the company can avoid upgrading its infrastructure. Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP. This could be implemented by slowing or disallowing traffic to competitors. While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;. More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down. This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents; there are a host of other issues, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here, we provide two. The first is to have the government legislate the behaviour of the ISPs, which is currently our only mechanism. This would transform the infrastructure into a virtual public good by legislating the behaviour of the ISPs to be in accordance with the best interests of the public. The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists or other means. Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it. These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet. We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people. This new infrastructure would coexist and operate in parallel with the current ISPs. Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people might still desire higher speeds. In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal). This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure. The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries. Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users&#039; laptop and Internet aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres in which the mesh is located could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
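As a rough illustration of the super-node idea above, the sketch below elects the most capable nodes (scored here by uptime times bandwidth, an assumed metric) and attaches each ordinary node to a super-node neighbour. This is a toy model, not the DART or landmark-flooding algorithms cited.

```python
# Hypothetical super-node election for a wireless mesh.
# The capacity metric (uptime * bandwidth) and top_fraction are assumptions.

def elect_super_nodes(nodes, neighbours, top_fraction=0.1):
    """Pick the most capable nodes as super nodes; attach the rest to them."""
    scored = sorted(nodes, key=lambda n: n["uptime"] * n["bandwidth"], reverse=True)
    k = max(1, int(len(scored) * top_fraction))
    supers = {n["id"] for n in scored[:k]}
    routing = {}
    for n in nodes:
        # Each ordinary node routes via the first super node among its neighbours.
        candidates = [m for m in neighbours[n["id"]] if m in supers]
        routing[n["id"]] = candidates[0] if candidates else None
    return supers, routing
```

In a real mesh this election would be rerun as nodes join, leave, or change capacity.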
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and similar services that are tolerant of lower speeds from the conventional infrastructure.  This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  Because a mesh does not present a single point of connection, it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a major benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for users whose primary use of the Internet is browsing low-bandwidth websites, and it could also make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Due to its low population density, Canada has areas comparable to the rural regions where the technology has already been deployed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless hardware to create a small network.  As the mesh grows, it can be self-organizing, with nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the urban centre&#039;s higher-speed wired infrastructure.  The relationship between the density of connection points and the speeds the mesh can sustain has been studied, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in providing this infrastructure would necessitate an increase in taxes.  Since the support would come from all levels of government, however, the cost would be distributed across all levels, making the increase at any one level almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  Email is a good example: it is normally considered a low-bandwidth service, but if a large attachment were present it would make sense to use the faster network connection to download it.  The software would therefore have to be aware of the availability and capability of both networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, in which the population provides some of the nodes that route traffic and otherwise maintain the network, incurs some cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
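The network-switching behaviour described under &#039;&#039;&#039;Software Changes&#039;&#039;&#039; above could look something like the following sketch; the size threshold and network names are assumptions made purely for illustration.

```python
# Illustrative policy for choosing between a public mesh and a private ISP link.
# MESH_MAX_BYTES is an assumed cutoff, not a figure from the paper.

MESH_MAX_BYTES = 256 * 1024  # small transfers are assumed fine over the slower mesh

def choose_network(payload_bytes, mesh_available=True, isp_available=True):
    """Route small traffic over the public mesh; fall back to the ISP for bulk."""
    if mesh_available and payload_bytes <= MESH_MAX_BYTES:
        return "mesh"
    if isp_available:
        return "isp"  # e.g. a large email attachment goes over the fast link
    return "mesh" if mesh_available else None
```

An email client using this policy would fetch message headers and bodies over the mesh but switch to the ISP link when downloading a large attachment.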
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so they can be reused later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned and the request is not passed on to the originating web server. Many aspects of websites do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of total bandwidth used and, of this web-based traffic, the level of similarity between requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. For many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in apparent latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can be reduced substantially. It has been found that even small performance improvements made by an ISP through caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
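As a back-of-the-envelope reading of the figures cited above (web traffic accounting for up to 70% of an ISP&#039;s bandwidth, with request similarity as high as 50%), a perfectly effective cache could avoid roughly 35% of total outbound bandwidth. Real hit rates are lower, so this is an upper bound:

```python
# Upper bound on bandwidth savings from caching, using the cited figures.
web_share = 0.70     # fraction of total ISP traffic that is web traffic
repeat_share = 0.50  # fraction of web requests similar enough to be served from cache

max_savings = web_share * repeat_share  # 0.35, i.e. 35% of total bandwidth
print(f"upper bound on bandwidth saved: {max_savings:.0%}")  # prints 35%
```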
&lt;br /&gt;
Additional advantages include the robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
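The basic cache-or-fetch decision described at the start of this section can be sketched in a few lines. Here `fetch_from_origin` is a stand-in for a real HTTP request, not an actual library call:

```python
# Minimal sketch of a web cache: serve from cache when possible, otherwise
# fetch from the origin server and keep a copy for subsequent requests.

cache = {}

def fetch_from_origin(url):
    return f"<content of {url}>"  # placeholder for a real network fetch

def get(url):
    if url in cache:
        return cache[url], "HIT"   # served locally; no request leaves the cache
    body = fetch_from_origin(url)
    cache[url] = body              # store for the next user who asks
    return body, "MISS"
```

A production cache would also honour expiry and validation headers (the "certain conditions" mentioned above) before serving a stored copy.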
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then regional and then finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches benefit from their efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
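The hierarchical lookup just described (pass a miss up through the levels, then leave a copy at each lower level on the way back down) can be sketched as follows; the number of levels and their representation as plain dicts are illustrative:

```python
# Sketch of hierarchical web caching: lowest level (closest to the client) first.
# A hit at any level populates every level below it, so popular content
# propagates toward the demand.

def hierarchical_get(url, levels, fetch_origin):
    """levels: ordered list of dict caches, local first, national last."""
    for i, level in enumerate(levels):
        if url in level:
            body = level[url]
            break
    else:
        # Missed every level: go to the originating web server.
        i, body = len(levels), fetch_origin(url)
    for level in levels[:i]:
        level[url] = body          # leave a copy at each lower level
    return body
```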
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with one another to satisfy web requests. To do this, each cache retains metadata about the contents of all of the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance not available to strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
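A minimal sketch of the cooperation just described, in which each cache consults its peers&#039; advertised contents before going to the origin. The `digest()` here, just a set of keys, is a deliberate simplification of the metadata exchange real distributed caches use:

```python
# Sketch of distributed web caching: one level of peer caches that share
# metadata about their contents and serve each other's misses.

class PeerCache:
    def __init__(self, name):
        self.name = name
        self.store = {}   # cached objects held locally
        self.peers = []   # cooperating caches at the same level

    def digest(self):
        return set(self.store)  # metadata advertised to cooperating caches

    def get(self, url, fetch_origin):
        if url in self.store:
            return self.store[url]
        for peer in self.peers:          # consult peers before the origin
            if url in peer.digest():
                body = peer.store[url]
                self.store[url] = body   # keep a local copy too
                return body
        body = fetch_origin(url)
        self.store[url] = body
        return body
```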
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists, but a number of caches on each level cooperate with each other in a distributed fashion. This type of system can combine the different advantages of the hierarchical and distributed architectures. The Internet Cache Protocol&amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, in which a cache hierarchy exists with a number of individual caches cooperating at each level&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
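Combining the two previous ideas, a hybrid lookup might check the sibling caches at each level before moving up the hierarchy. This is an illustrative sketch, not an ICP implementation:

```python
# Sketch of hybrid caching: a hierarchy whose levels are each a group of
# cooperating caches. A miss is offered to siblings first, then passed upward.

def hybrid_get(url, levels, fetch_origin):
    """levels: list (lowest first) of lists of dict caches cooperating per level."""
    for group in levels:
        for cache in group:        # distributed lookup among siblings
            if url in cache:
                return cache[url]
    body = fetch_origin(url)       # missed every level: go to the origin
    levels[0][0][url] = body       # store at the entry cache for next time
    return body
```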
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, of course, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level caches, and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized, standardized cache hierarchies would reduce wasted bandwidth and improve the end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should Internet Service Providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power, removing the burden from the users&#039; computers and letting a special-purpose device take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow greater reliability than the previously described solution. As in the previous example, these devices could participate as active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built quite inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web application code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back end to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet wilfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers will go down. Currently, web caches implemented by different ISPs do not work together, so an uncached web request from a user of one ISP that might be satisfiable by a cache at another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chance that a user&#039;s web request can be satisfied nearby is significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. The effect would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: web requests satisfied within a user&#039;s immediate neighbourhood would be extremely fast. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed web caching strategy implements distributed caching at each level of the caching hierarchy, it injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than what can be efficiently used as a cache, data can be duplicated. This duplication allows for fault tolerance, and the caches could be implemented to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability when combined with full web application caching: if a single region were disconnected from the Internet, users could still use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is re-established, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, because full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; in any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies that improve how caching is done, could be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both its software and its infrastructure, is incrementally deployable. A scheme like the one proposed would most likely start in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would certainly involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as the higher levels of the hierarchy (provincial, national, etc.), would need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and the support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (in itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet’s vast ubiquity, as discussed in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (the Domain Name System) serves this purpose by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the Internet. To make the system as a whole work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the Internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
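The &amp;quot;switchboard&amp;quot; behaviour described above is exactly what the standard resolver interface exposes: given a name, return the addresses an application can connect to. A minimal example using Python&#039;s standard library:

```python
# Name-to-address lookup via the system's configured DNS resolver.
import socket

def resolve(hostname):
    """Return the set of IPv4/IPv6 addresses the resolver knows for a name."""
    infos = socket.getaddrinfo(hostname, None)
    return {info[4][0] for info in infos}  # info[4] is the (address, ...) sockaddr
```

Which resolver actually answers this call (the ISP&#039;s, or an alternative such as a public DNS service) is determined by the operating system&#039;s network configuration, which is the subject of the next subsections.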
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher level view of the system is taken on.  It is considered for the purposes of this discussion as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database or tree of names to IP addresses for its users to access.&lt;br /&gt;
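As a concrete illustration of the lookup just described, the following Python sketch asks the system&#039;s configured resolver (normally the ISP&#039;s DNS server) to translate a name into addresses; the helper name &amp;quot;resolve&amp;quot; is our own, not part of any system discussed in this paper.&lt;br /&gt;

```python
import socket

def resolve(hostname):
    # Query the system's configured resolver for the addresses
    # that correspond to a host name.
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry's sockaddr tuple starts with the address string;
    # deduplicate and sort for a stable result.
    return sorted({info[4][0] for info in infos})
```

From the user&#039;s perspective this one call hides the entire distributed tree: the resolver walks it on the caller&#039;s behalf and returns only the final answer.&lt;br /&gt;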
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  Users implicitly accept that all Internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in &amp;quot;good Samaritans&amp;quot; in a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise when considering user privacy, though.  In the case of Google, there is reason to consider how it would treat and use the information it gains access to, even given a clean track record of providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to see every single thing that is sought out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues need to be considered as well for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing ample content, it is difficult to imagine this large a responsibility resting on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These centre on bottlenecks, update propagation, attack resiliency and general performance.  Any replacement system should improve upon these issues.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users.  For example, Bell Canada customers are served by just two DNS servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the Internet.  DNS servers around the world update their records on static schedules and, with caching taken into account, this period is required to get changes across.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, indicated by the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely disrupt Internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major avenue for improving DNS performance, reducing latency much as web caching does for regular content browsing.  DNS caches could be contained within the web caching schemes presented in the previous section, as the hierarchical structure described there functions equally well for DNS purposes.  Ideally, the DNS cache would essentially piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion; users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
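The piggybacked cache described above can be modelled as a simple table of answers with expiry times. The following Python sketch is an illustrative model only, assuming an upstream resolver callable and a fixed time-to-live; it is not the actual cache design of any deployment discussed here.&lt;br /&gt;

```python
import time

class DNSCache:
    """Minimal TTL-based name cache (illustrative model, not a real resolver)."""

    def __init__(self, resolver, ttl=300):
        self.resolver = resolver   # upstream lookup callable (assumption)
        self.ttl = ttl             # seconds a record stays fresh
        self.store = {}            # name: (address, expiry_time)

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(name)
        if entry and entry[1] > now:
            return entry[0]        # cache hit: answered locally
        # Miss or expired entry: one upstream query, then refresh the record.
        address = self.resolver(name)
        self.store[name] = (address, now + self.ttl)
        return address
```

A lookup within the TTL window is answered locally; an expired entry triggers one upstream query and refreshes the record. The same expiry mechanism is why nameserver changes take time to propagate, as noted under Update Propagation above.&lt;br /&gt;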
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the Internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the Internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, with this implementation the static DNS tree is decentralized and distributed across the network.  This eliminates bottlenecks and increases resiliency against attack, as the single points of failure are removed.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  A user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.  Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the Internet as we know it today.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest behind maintaining this service, the reliability and trust issues are satisfied.  Users must trust some entity for the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most ideal for the public.  The incremental deployment of the CoDoNS service, for example, would suit a gradual, publicly managed rollout.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized, as users need only use the sites as they see fit and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the Internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so it is required that any such organization work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining and acquiring or mandating the current system would impose a financial burden on the public, as with any good that is brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than some form of public authority, given their lower overhead and lesser caution when it comes to decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the Internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could end up owning aspects that are not permanent and will become obsolete quickly. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the Internet should be left in private hands, and only after these aspects have proven themselves vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users for a given Internet public good is essential. If an aspect of the Internet cannot be given guaranteed access to all of the users in its reach, then it should not be considered a public good, by definition.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be trialed in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the Internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the World Wide Web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern-day society, and access to it is becoming more and more essential as time goes by. It is due to this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or unlikely for the public to hold. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes the most sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. By using this set of criteria, one would be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9476</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9476"/>
		<updated>2011-04-12T02:13:37Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS Evolution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all within a society.  The internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the access and use of the internet.  Using three examples of public goods candidates (physical infrastructure, web caching and DNS), the viability and benefits of this conversion will be illustrated.  Finally, criteria to identify other candidates for public goods will be established.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the internet to this list. The internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it to one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the internet, web caching and DNS. We propose how these aspects could be moved from being solely in the hands of private companies and converted to the public good.  We chose these three pieces because they are absolutely essential to the current operation of the internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* e.g. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (e.g. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the internet is as important as roads for conducting their daily activities; yet while roads are not privately owned, the infrastructure of the internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we do not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise with the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is currently used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using various criteria decided by the ISPs.  While this can benefit everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We don&#039;t know whether this technology is deployed merely to decrease bandwidth consumption so the company can avoid upgrading the infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points.  During an uprising in Egypt, the incumbent government shut down the population&#039;s access to the internet by simply forcing the ISPs to shut down.  This is not a conclusive list of the weaknesses that private ownership of the infrastructure presents; there are a host of others, but these few alone are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the behaviour of the ISPs to be in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be influenced unduly by private industries through lobbyists and other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure wouldn&#039;t be as fast as the incumbents&#039; and people might desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
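The super-node election mentioned above can be sketched in a few lines. The following is a hypothetical illustration only, assuming nodes are ranked by a simple availability-and-bandwidth score (the fields and weights are invented for the example; real protocols such as DART are considerably more sophisticated):&lt;br /&gt;

```python
# Hypothetical sketch: electing "super nodes" in a mesh by a simple
# fitness score. Node fields and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    uptime: float      # fraction of time online, 0.0-1.0
    bandwidth: float   # relative link capacity
    mobile: bool       # mobile nodes make poor routers

def elect_super_nodes(nodes, count):
    """Rank candidates and return the best `count` as super nodes."""
    def score(n):
        # Stationary, highly available, high-bandwidth nodes score best.
        penalty = 0.5 if n.mobile else 1.0
        return n.uptime * n.bandwidth * penalty
    candidates = sorted(nodes, key=score, reverse=True)
    return [n.name for n in candidates[:count]]

nodes = [
    Node("home-pc-1", uptime=0.95, bandwidth=10.0, mobile=False),
    Node("laptop-1", uptime=0.40, bandwidth=8.0, mobile=True),
    Node("home-pc-2", uptime=0.80, bandwidth=6.0, mobile=False),
]
print(elect_super_nodes(nodes, 2))  # ['home-pc-1', 'home-pc-2']
```

Stationary, highly available machines naturally win the election, matching the intuition above that home computers rather than laptops should carry routing responsibility.&lt;br /&gt;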
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a major benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for users whose primary use of the internet is surfing or visiting low-bandwidth websites, and it could help make internet access available to fiscally disadvantaged members of the population as well.  Finally, a mesh topology has the potential to extend internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, with its low population density, has many areas comparable to the rural regions where this technology has already been deployed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the wireless of the neighbours to create a small network.  As the mesh grows it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds that are sustainable by the mesh, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the cost would be distributed across those levels, making the increase at any one of them small.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service: if a large attachment were present, it would make sense to use the faster network connection to download it.  The software would thus have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
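As a rough sketch of that software change, the selection logic might look like the following (the threshold value and network labels are assumptions made purely for illustration):&lt;br /&gt;

```python
# Illustrative sketch of "network-aware" software: choose between the
# free public mesh and the fast ISP link based on payload size.
MESH_THRESHOLD_BYTES = 256 * 1024  # assumed cutoff for "low bandwidth"

def choose_network(payload_bytes, isp_available=True):
    """Small transfers ride the public mesh; large ones use the ISP link."""
    if payload_bytes <= MESH_THRESHOLD_BYTES or not isp_available:
        return "mesh"
    return "isp"

print(choose_network(4 * 1024))          # plain-text email -> mesh
print(choose_network(20 * 1024 * 1024))  # large attachment -> isp
```

A mail client built this way would fetch headers and message bodies over the mesh and fall back to the ISP connection only for large attachments, as described above.&lt;br /&gt;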
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could take the form of consumed CPU cycles and the additional power needed to keep nodes available.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be served later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (e.g. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt;Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link]&amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt;Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency to the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content is also cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage on the providing web server&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections passed through to it by serving the data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
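The hit/miss behaviour described at the start of this section can be sketched minimally as follows (the fetch callback stands in for a real HTTP client, and the counters exist only to make the bandwidth savings visible):&lt;br /&gt;

```python
# Minimal sketch of web caching: serve a stored copy on a repeat
# request instead of contacting the origin server again.
class WebCache:
    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self._store:          # cache hit: no origin traffic
            self.hits += 1
            return self._store[url]
        self.misses += 1                # cache miss: fetch and remember
        body = self._fetch(url)
        self._store[url] = body
        return body

origin_calls = []
def fake_origin(url):
    origin_calls.append(url)
    return f"<html>{url}</html>"

cache = WebCache(fake_origin)
cache.get("http://example.com/logo.png")   # miss: goes to origin
cache.get("http://example.com/logo.png")   # hit: served locally
print(cache.hits, cache.misses, len(origin_calls))  # 1 1 1
```

Two requests for the same logo generate only one trip to the origin server, which is precisely the bandwidth and server-load saving listed above.&lt;br /&gt;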
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. Web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand.&lt;br /&gt;
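A minimal sketch of this lookup path, with a copy left at each level as the data travels back down (the level names and the fetch stand-in are illustrative assumptions):&lt;br /&gt;

```python
# Sketch of a hierarchical cache: a miss is passed up the chain
# (local -> regional -> national -> origin) and the answer is
# copied into every level on the way back down.
class CacheLevel:
    def __init__(self, name, parent=None, fetch=None):
        self.name, self.parent, self._fetch = name, parent, fetch
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        if self.parent is not None:
            body, source = self.parent.get(url)
        else:
            body, source = self._fetch(url), "origin"
        self.store[url] = body   # leave a copy at this level
        return body, source

national = CacheLevel("national", fetch=lambda u: f"data:{u}")
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)

_, src1 = local.get("http://example.com/")   # first request reaches origin
_, src2 = local.get("http://example.com/")   # repeat request is a local hit
print(src1, src2)  # origin local
```

After the first request every level holds a copy, so a different client attached to the regional cache would be served regionally, which is how popular sites "propagate towards the demand".&lt;br /&gt;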
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing as well as introduces fault tolerance that was not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
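The cooperation just described can be sketched as follows, assuming each cache keeps a simple directory mapping URLs to the peer believed to hold them (how directories stay synchronized is elided here; real systems use digests or hash-based schemes):&lt;br /&gt;

```python
# Sketch of single-level distributed caching: a miss is forwarded to
# the peer the directory says holds the object, not to the origin.
class DistCache:
    def __init__(self, name, fetch):
        self.name, self._fetch = name, fetch
        self.store = {}
        self.directory = {}   # url -> peer believed to hold it
        self.peers = {}

    def link(self, peers):
        self.peers = {p.name: p for p in peers}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        holder = self.directory.get(url)
        if holder in self.peers and url in self.peers[holder].store:
            body, source = self.peers[holder].store[url], holder
        else:
            body, source = self._fetch(url), "origin"
        self.store[url] = body
        for peer in self.peers.values():      # advertise our new copy
            peer.directory[url] = self.name
        return body, source

fetch = lambda u: f"data:{u}"
a = DistCache("cache-a", fetch)
b = DistCache("cache-b", fetch)
a.link([b]); b.link([a])

_, src1 = a.get("http://example.com/")  # miss everywhere: origin
_, src2 = b.get("http://example.com/")  # b's directory points at a
print(src1, src2)  # origin cache-a
```

The second cache satisfies its request from its peer rather than the origin, illustrating both the load balancing and the fault tolerance claimed for this architecture.&lt;br /&gt;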
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists, but the caches at each level also cooperate with one another in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol&amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow the building of neighbourhood-specific, ultra-fast caches. It could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently carried at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while using the locally cached data as well. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatic increases in hardware and support costs as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the internet which, for one reason or another, become disconnected from the internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even one whose section of the internet is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in all of the reachable caches. This added robustness could significantly reduce the panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth, in the form of unneeded web requests being sent from the caches to the originating web servers, would go down. Currently, web caches implemented by different ISPs do not work together, so uncached requests from users of one ISP that could be satisfied by the caches of another local ISP must instead be serviced all the way from the originating web server. With the proposed architecture these caches could work together, essentially multiplying the available cache size. More such requests would be satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied nearby are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. The effect would be especially noticeable if the lowest level of the proposed caching hierarchy (the distributed, neighbourhood-level cache) were implemented: requests satisfied within a user&#039;s immediate neighbourhood would be extremely fast. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national-level caches rather than being sent all the way to the original web server.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed web caching strategy implements distributed caching at each level of the caching hierarchy, it injects a level of reliability that is not present in modern web caching. Since the storage space of the distributed caches at each level is likely to be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication would provide fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the internet&#039;s most popular web sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications could now be cached means that any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies that improve how caching is done, can be implemented whenever it is in the public&#039;s best interest. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure in place, even if it were to incorporate the ISP caches already deployed, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet&#039;s vast ubiquity, as described in the previous sections, a convenient method is required to refer to the different resources within this distributed system.  DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher level view of the system is taken on.  It is considered for the purposes of this discussion as a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  In practice, this means all internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Michael Geist - Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Ars Technica - Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Slashdot.org - Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project&amp;lt;ref name=&amp;quot;GDNS&amp;quot;&amp;gt;Google Public DNS[http://code.google.com/speed/public-dns/ link]&amp;lt;/ref&amp;gt; or OpenDNS&amp;lt;ref name=&amp;quot;openDNS&amp;quot;&amp;gt;OpenDNS [http://www.opendns.com/ link]&amp;lt;/ref&amp;gt;. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues arise, however.  In the case of Google, there is reason to consider how Google would treat and use the information it gains access to, even given its clean track record in providing free applications and services to users.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;  Google would gain deep access to user behaviour, being able to determine everything that users seek out.&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Resource and maintenance issues also need to be considered for any &amp;quot;community-based&amp;quot; project.  As strong as user-generated communities can be at providing and generating ample content, it is difficult to imagine so large a responsibility lying on the backs of these &amp;quot;good Samaritans.&amp;quot;  The configuration may also demand a fair bit of the end user, as local DNS settings may have to be changed frequently to keep up with changes as they occur.&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These problems concern bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users.  For example, Bell Canada customers are served by two DNS servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the internet.  DNS servers around the world update their records on a static schedule and, because cached entries persist until they expire, this period is required to get the changes across.&lt;br /&gt;
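The propagation delay can be understood through record lifetimes: a resolver keeps a cached answer until its time-to-live (TTL) expires, so a change stays invisible until every cache&#039;s old entry has aged out. A minimal sketch of that mechanism (the class name and values are hypothetical, for illustration only):&lt;br /&gt;

```python
import time

class DnsCache:
    """Toy resolver cache: entries expire after a per-record TTL,
    which is why nameserver changes take time to propagate."""
    def __init__(self):
        self._store = {}  # name -> (ip, expiry timestamp)

    def put(self, name, ip, ttl_seconds):
        self._store[name] = (ip, time.time() + ttl_seconds)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        ip, expiry = entry
        if time.time() >= expiry:   # stale: drop it, forcing a fresh lookup
            del self._store[name]
            return None
        return ip

cache = DnsCache()
cache.put("example.org", "192.0.2.1", ttl_seconds=0.1)
print(cache.get("example.org"))   # fresh entry: 192.0.2.1
time.sleep(0.2)
print(cache.get("example.org"))   # expired: None
```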
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely disrupt internet traffic at any time.  Measures are in place to prevent this kind of attack; however, as with anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Caching is a major aspect of the improvements that lie ahead for DNS.  DNS caching improves performance by reducing latency, much as web caching does for regular content browsing.  DNS caches could be contained within the Web Caching Schemes presented in the previous section, as the hierarchical structure described there can function equally well for DNS purposes.  Ideally, the DNS cache would piggyback at each level of the web cache, thereby providing content locally in a somewhat democratic fashion: users dictate which sites load quickly simply by visiting them.&lt;br /&gt;
&lt;br /&gt;
Research is being done on improving the performance, attack resiliency, bottleneck prevention and update propagation issues that hinder the legacy DNS deployment, even when it is aided by caching.  One candidate for a next-generation naming system, actively being researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;Venugopalan Ramasubramanian and Emin Gün Sirer. 2004. The design and implementation of a next generation name service for the internet. SIGCOMM Comput. Commun. Rev. 34, 4 (August 2004), 331-342. DOI=10.1145/1030194.1015504 http://doi.acm.org/10.1145/1030194.1015504. Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf link]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It also has the benefit of being incrementally deployable, which is very important when it comes to upgrading any part of a complex, distributed system like the internet. Due to the high-level nature of this report, technical specifications will be avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
&lt;br /&gt;
In essence, this implementation decentralizes the static DNS tree and distributes it across the network.  This removes the bottlenecks and increases resiliency against attack, as the single points of failure have been eliminated.&lt;br /&gt;
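The flavour of such a decentralization can be sketched with consistent-hash-style placement: each name deterministically maps to a &amp;quot;home&amp;quot; peer, so no single server holds the whole tree. This is a deliberately simplified illustration, not the actual CoDoNS protocol (which, per the cited paper, builds on the Beehive/Pastry overlay); the node names are hypothetical.&lt;br /&gt;

```python
import hashlib

# Hypothetical set of cooperating peers.
NODES = ["node-a", "node-b", "node-c", "node-d"]

def home_node(domain):
    """Map a domain name deterministically onto one peer."""
    digest = hashlib.sha1(domain.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

for name in ["example.org", "carleton.ca", "wikipedia.org"]:
    print(name, "->", home_node(name))
```

Because every peer computes the same mapping, any of them can route a query toward the responsible node, and losing one node loses only its share of the name space rather than the whole service.&lt;br /&gt;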
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  The user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.  Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the internet as we know it today.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest in maintaining this service, the reliability and trust issue is satisfied.  Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of a service such as CoDoNS makes this kind of staged, public-interest rollout feasible. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Dual Implementation with Web Caching&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A DNS cache can be implemented hierarchically along with a public web cache to capitalize further on the benefits of web caching, namely reduced latency, less wasted bandwidth, and enhanced robustness and reliability. The cache is also democratized: users need only use the sites as they see fit, and those sites will appear in the cache for faster lookup and content retrieval.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization would be required to work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as would any good brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than some form of public authority would, given less overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates its upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could start owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would end up being very expensive. In general, novel aspects of the internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot be guaranteed accessible to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes these are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the internet provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them on the world wide web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the internet is becoming a ubiquitous entity in modern-day society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that it would be undesirable or impossible for the public to hold. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes more sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and only true, way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9439</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9439"/>
		<updated>2011-04-11T22:11:52Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* General Public Goods and the Internet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all.  The internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it to operate should be placed in trust for the benefit of the entire population.    In this paper we establish a model to help define public goods as they relate to the internet.  Using three examples of public-good candidates (physical infrastructure, web caching and DNS), we illustrate the viability and benefits of this conversion.  Finally, we establish criteria with which to identify other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the internet to this list. The internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the internet, web caching and DNS.  We propose how these aspects could be removed from the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the internet as candidates for the public good.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* e.g. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (ie. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual.  For many, the internet is as important as roads for conducting their daily activities; yet while roads are not privately owned, the infrastructure of the internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise with the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion; it does this by assigning priorities to packets using various criteria decided by the ISPs.  While such a technology could benefit everyone, when it is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is done only at peak times.  We don&#039;t know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the internet by simply forcing the ISPs to shut down.  This is not a conclusive list of the weaknesses that private ownership of the infrastructure presents; there are a host of others, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industries through lobbyists and other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably, the speed of this new infrastructure wouldn&#039;t be as fast as the incumbents&#039;, and people might desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability, consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
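The super-node election mentioned above can be sketched in a few lines. This is a hypothetical illustration only: the scoring rule (pick the most available node per neighbourhood) and all names are assumptions for the sketch, not drawn from the cited DART or landmark-flooding protocols.

```python
# Hypothetical sketch: electing super nodes in a wireless mesh.
# Each node reports an availability score; the highest-scoring node
# in each neighbourhood is elected to handle routing.

def elect_super_nodes(nodes):
    """nodes: list of (node_id, neighbourhood, availability) tuples.
    Returns a {neighbourhood: node_id} map naming the most-available
    node in each neighbourhood."""
    winners = {}
    for node_id, hood, availability in nodes:
        best = winners.get(hood)
        if best is None or availability > best[1]:
            winners[hood] = (node_id, availability)
    return {hood: node_id for hood, (node_id, _) in winners.items()}

mesh = [
    ("desktop-a", "glebe", 0.98),    # static home computer, high uptime
    ("laptop-b", "glebe", 0.40),     # mobile node, variable availability
    ("desktop-c", "westboro", 0.95),
    ("phone-d", "westboro", 0.20),
]
print(elect_super_nodes(mesh))  # the static nodes win in both neighbourhoods
```

As the text notes, the fairly static home computers would tend to win such elections, leaving the mobile devices as ordinary mesh members.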
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services such as email, instant messaging, and other services more tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the internet might be surfing or visiting low-bandwidth websites.  It could also help make internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to our low population density, has areas comparable to the rural regions where this technology has been deployed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless hardware to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds that are sustainable by the mesh, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to be changed.  An example of this would be email, which is normally considered a low-bandwidth service. If a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and the capability of the two networks and switch between them in specific cases.&lt;br /&gt;
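The network-switching logic described above can be sketched as follows. The 1 MiB threshold and the network names are assumptions made purely for illustration; real software would presumably probe both links and weigh cost against speed.

```python
# Illustrative sketch of software choosing between the two networks
# described above: the public mesh for small, speed-tolerant payloads
# and the faster private ISP link for large attachments.

MESH_THRESHOLD_BYTES = 1 * 1024 * 1024  # assumed cutoff: 1 MiB

def choose_network(payload_bytes, isp_available=True):
    if payload_bytes <= MESH_THRESHOLD_BYTES:
        return "public-mesh"    # email text, IM: tolerant of low speed
    if isp_available:
        return "private-isp"    # large attachment: use the fast link
    return "public-mesh"        # degrade gracefully without an ISP link

print(choose_network(4_000))              # small email body
print(choose_network(50 * 1024 * 1024))   # 50 MiB attachment
```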
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of websites do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is reduced significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that a web cache adds to the internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
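The basic get-or-fetch behaviour described at the start of this section can be sketched as follows. This is a minimal illustration, not a real proxy: `fetch_origin` stands in for an actual network fetch, and cache-validity conditions (expiry, revalidation) are ignored.

```python
# Minimal sketch of basic proxy-cache behaviour: return the stored
# copy if present; otherwise fetch from the origin server, keep a
# copy, and return it.

class WebCache:
    def __init__(self, fetch_origin):
        self.store = {}
        self.fetch_origin = fetch_origin
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1
            return self.store[url]          # served locally, no origin traffic
        self.misses += 1
        body = self.fetch_origin(url)       # request passed to the origin
        self.store[url] = body              # keep a copy for later requests
        return body

cache = WebCache(lambda url: "body:" + url)
cache.get("http://example.org/logo.png")    # miss: fetched from origin
cache.get("http://example.org/logo.png")    # hit: served from the cache
print(cache.hits, cache.misses)
```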
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
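The lookup-and-copy-down behaviour just described can be sketched as follows. The level names and content are illustrative assumptions; real hierarchies also deal with expiry and cache sizing, which this sketch omits.

```python
# Sketch of a hierarchical cache lookup: a request climbs
# client -> regional -> national until satisfied (falling back to the
# origin server), and the data is copied into every lower level on
# the way back down.

def hierarchical_get(url, levels, fetch_origin):
    """levels: list of dict caches ordered lowest (client) to highest."""
    for i, cache in enumerate(levels):
        if url in cache:
            body = cache[url]
            break                       # satisfied at level i
    else:
        i = len(levels)
        body = fetch_origin(url)        # no level had it: go to the origin
    for cache in levels[:i]:
        cache[url] = body               # leave a copy at each lower level
    return body

client, regional, national = {}, {}, {"http://popular.example/": "page"}
hierarchical_get("http://popular.example/", [client, regional, national],
                 fetch_origin=lambda u: "origin")
# the popular page has now propagated down towards the demand
print(sorted(client), sorted(regional))
```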
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing as well as introduces fault tolerance that was not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
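The single-level cooperation described above can be sketched as follows. This is a simplified illustration of the idea only: the metadata each cache keeps about its peers is collapsed into one shared directory, whereas the cited systems exchange and maintain that metadata between caches.

```python
# Sketch of one-level distributed caching: each cache advertises what
# it holds into shared metadata (url -> holding cache), and forwards
# a request to the holder instead of the origin server.

class DistributedCache:
    def __init__(self, name, directory):
        self.name = name
        self.store = {}
        self.directory = directory      # simplified shared metadata

    def put(self, url, body):
        self.store[url] = body
        self.directory[url] = self      # advertise the copy to peers

    def get(self, url, fetch_origin):
        if url in self.store:
            return self.store[url]
        holder = self.directory.get(url)
        if holder is not None:
            return holder.store[url]    # satisfied by a cooperating peer
        body = fetch_origin(url)        # nobody has it: go to the origin
        self.put(url, body)
        return body

directory = {}
a = DistributedCache("cache-a", directory)
b = DistributedCache("cache-b", directory)
a.put("http://example.org/", "page")
print(b.get("http://example.org/", fetch_origin=lambda u: "origin"))
```

Because any peer's copy satisfies the request, load spreads across the caches and the loss of one cache only loses its local copies, which is the fault tolerance the text attributes to this architecture.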
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the internet, and therefore is vitally important to the end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, and they do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between both the financial and end user experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in the exact same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end user&#039;s best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to that of BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding what other users to contact to try and retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option to allow for lower level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special purpose device to take over. Since the majority of users would not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while using the locally cached data as well. In this type of system, web developers would develop their applications to make use of the available resources and then maintain a minimal back end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. This means that a small number of people with a very good idea could realistically come together to implement their application. Growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially this would allow anyone to write the next Facebook or Google without needing the enormous financial or physical resources of modern day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the internet that, for one reason or another, become disconnected from the internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the internet willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests being sent out from the caches to the originating web servers will go down. Currently web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that might be able to be satisfied by caches implemented by another local ISP must be retrieved all the way from the originating web server. With the proposed architecture, these caches could then work together, essentially multiplying the available cache size. This would result in these types of web requests being satisfied locally and reducing the amount of long distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would provide fault tolerance, and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region were disconnected from the internet, users would still be able to use the popular web applications and data that are cached until they were reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into the public hands, private interests in how web caches are controlled would now be a secondary concern to those of the public. This means that any innovation in web caching along with new technologies to improve how web caching is done can be implemented if it is in the best interests of the public. Currently we must rely on these upgrades being a worthy investment for a given ISP, regardless of how much upgrades would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software and infrastructure wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users/regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure in place, even if it were to incorporate the already deployed ISP caches, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new caches. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, there is a requirement for a convenient method to refer to the different resources within this distributed system.  DNS (Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a more simplified, higher-level view of the system.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
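As a rough illustration of this simplified model, the following sketch treats the name space as a static tree and walks it one label at a time. The tree contents here (the names and the address) are illustrative assumptions only, not real DNS data:&lt;br /&gt;

```python
# Toy model of DNS as a static, distributed tree: each label of the
# domain name selects a child node, and leaves hold IP addresses.
DNS_TREE = {
    "com": {
        "example": {
            "www": "93.184.216.34",
        },
    },
}

def resolve(domain, tree=DNS_TREE):
    """Walk the tree from the root, most significant label (TLD) first."""
    node = tree
    for label in reversed(domain.split(".")):
        if not isinstance(node, dict) or label not in node:
            return None  # the NXDOMAIN case in real DNS terms
        node = node[label]
    # only a leaf (a string) is a usable answer
    return node if isinstance(node, str) else None

print(resolve("www.example.com"))    # 93.184.216.34
print(resolve("www.nonexistent.org"))  # None
```

A real resolver distributes this walk across many servers (root, TLD, authoritative), but the query semantics are the same: a name goes in, an address or a failure comes out.&lt;br /&gt;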
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  It is understood by users that all internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011 [http://www.michaelgeist.ca/content/view/3199/1/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011 [http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011 [http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project or OpenDNS. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
However, issues arise when considering user privacy.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011 [http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011 [http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011 [http://theos.in/windows-xp/free-fast-public-dns-server-list/ link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, the current implementations suffer from several problems, centred around bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users.  For example, Bell Canada serves its customers across the entire country with just two DNS servers.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the internet.  DNS servers around the world update their records on static schedules and, because answers are cached, this period is required for changes to spread.&lt;br /&gt;
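The delay can be sketched as follows: a resolver caches an answer for its time-to-live (TTL) and keeps serving the stale record until it expires. The clock, names, addresses and TTL value below are illustrative assumptions, not measurements of real resolvers:&lt;br /&gt;

```python
# Minimal sketch of why DNS updates propagate slowly: cached answers
# are served until their TTL expires, even if the upstream record moved.
class CachingResolver:
    def __init__(self, upstream, now):
        self.upstream = upstream   # authoritative data: name -> (ip, ttl)
        self.now = now             # injected clock, in seconds
        self.cache = {}            # name -> (ip, expires_at)

    def resolve(self, name):
        hit = self.cache.get(name)
        if hit is not None and hit[1] > self.now():
            return hit[0]          # cached answer, possibly stale
        ip, ttl = self.upstream[name]
        self.cache[name] = (ip, self.now() + ttl)
        return ip

clock = [0]
upstream = {"example.org": ("10.0.0.1", 3600)}
r = CachingResolver(upstream, lambda: clock[0])

print(r.resolve("example.org"))               # 10.0.0.1, now cached
upstream["example.org"] = ("10.0.0.2", 3600)  # the domain moves
print(r.resolve("example.org"))               # still 10.0.0.1: stale cache
clock[0] = 4000                               # TTL has expired
print(r.resolve("example.org"))               # 10.0.0.2
```

With thousands of independent resolvers each holding records on their own schedules, the worst-case lag across the whole internet stretches to the quoted 48 hours.&lt;br /&gt;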
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified under the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely disrupt internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Research is being done on improving the indicated aspects of DNS.  One candidate for a next-generation naming system, being actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;The Design and Implementation of a Next Generation Name Service for the Internet - Accessed March 2011 [http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It has the added benefit of being incrementally deployable, which is very important when it comes to upgrading any part of a complex, distributed system like the internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  The user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.  Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the internet as we know it today.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issues are addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it would be done when deemed most beneficial for the public.  The incremental deployability of a service such as CoDoNS would make such an upgrade practical to roll out.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization would be required to work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring and maintaining the current system, or mandating its operation, would impose a financial burden on the public, as does any good that is brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemas sooner than some form of public authority, given less overhead or caution when it comes to decision making.  Users may miss out on the newest services available as the authority evaluates any upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could start owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would end up being very expensive. In general, novel aspects of the internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot be guaranteed to be accessible to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes these are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different than the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
&lt;br /&gt;
On top of the cumulative benefits that public goods on the internet would provide, they would also allow new technologies to emerge. Putting these resources into the public&#039;s hands would give everyday people easier access to them than ever before. One example given was the full web application caching discussed earlier. This type of open access to computing resources would allow people with great ideas to implement, test and deploy them to the world wide web in a way that was never possible before.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the internet is becoming a ubiquitous entity in modern-day society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or impossible for the public to hold. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes more sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to successfully identify future public good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9427</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9427"/>
		<updated>2011-04-11T20:20:53Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* General Public Goods and the Internet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all.  The internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it to operate should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the internet.  Using three examples of public good candidates (physical infrastructure, web caching and DNS), we illustrate the viability and benefits of this conversion.  Finally, we establish criteria with which to identify other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the internet to this long list. The internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the internet, web caching and DNS. We propose how these aspects could be removed from the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the internet as candidates for the public good.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* e.g. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (e.g. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, it quickly became a provider of entertainment and communication.  Today the internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the internet is as important as roads for conducting their daily activities; yet while roads are not privately owned, the infrastructure of the internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, and thus avoid congestion, by assigning priorities to packets using criteria decided by the ISPs.  While the technology itself can benefit everyone, because it is implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We do not know if this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently, we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the internet by simply forcing the ISPs to shut down.  This is not a conclusive list of the weaknesses private ownership of the infrastructure presents; there are a host of others, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists and other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law is passed preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people might still desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides an alternative transport for internet traffic.  At the simplest level, this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres in which the mesh is located could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
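The super-node election described above can be sketched very simply: stable, well-connected nodes are promoted to routing roles, while mobile nodes with variable availability remain ordinary members. The thresholds and node data below are illustrative assumptions, not taken from DART or the cited papers:&lt;br /&gt;

```python
# Hedged sketch of super-node election in a wireless mesh.
# nodes maps a name to (uptime fraction, bandwidth in Mbps).
def elect_super_nodes(nodes, min_uptime=0.9, min_bandwidth=10.0):
    """Return the names of nodes stable and fast enough to route traffic."""
    return sorted(
        name
        for name, (uptime, bandwidth_mbps) in nodes.items()
        if uptime >= min_uptime and bandwidth_mbps >= min_bandwidth
    )

mesh = {
    "home-pc-1": (0.98, 25.0),   # static home computer, high availability
    "home-pc-2": (0.95, 12.0),
    "laptop-1":  (0.40, 30.0),   # fast but mobile, variable availability
    "phone-1":   (0.30, 8.0),
}
print(elect_super_nodes(mesh))   # ['home-pc-1', 'home-pc-2']
```

A real protocol would re-run such an election continuously as nodes join, move and leave, which is exactly the routing-accuracy challenge the cited research addresses.&lt;br /&gt;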
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services more tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for users whose primary use of the internet is surfing or visiting low-bandwidth websites, and it could help make internet access available to fiscally disadvantaged members of the population as well.  Finally, a mesh topology has the potential to extend internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural regions where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighbourhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh grows, it can be self-organizing, with nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds that the mesh can sustain, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing some infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  Email, for example, is normally considered a low-bandwidth service, but if a large attachment were present it would make sense to use the faster network connection to download it.  The software would thus have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
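&lt;br /&gt;
The network-switching behaviour described above can be sketched as follows (a minimal illustration; the function name and the size threshold are assumptions, not part of any real system):&lt;br /&gt;

```python
# Illustrative sketch of software choosing between the slower public
# mesh and a private ISP link based on the size of a transfer, as
# described above for email attachments. Names/threshold are made up.

MESH_MAX_BYTES = 1_000_000    # assumed cutoff for the slower mesh


def choose_network(transfer_bytes, isp_available):
    """Small transfers use the public mesh; large ones prefer the ISP."""
    if transfer_bytes <= MESH_MAX_BYTES or not isp_available:
        return "mesh"
    return "isp"


choose_network(10_000, isp_available=True)        # plain email -> mesh
choose_network(50_000_000, isp_available=True)    # big attachment -> isp
```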
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes that route and otherwise maintain the network, incurs some cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be reused later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can sit somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can be reduced significantly as well. It has been found that small performance improvements made by an ISP through caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that web caches add to the internet, allowing users to access documents even if the supplying web server is down, as well as the ability for organizations to analyze internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
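&lt;br /&gt;
The basic request flow described above (serve from the cache when possible, otherwise fetch from the origin server and keep a copy) can be sketched as follows; all names here are illustrative, not a real proxy implementation:&lt;br /&gt;

```python
# Minimal sketch of the web-cache request flow: a hit is served
# locally with no outbound traffic, a miss goes to the origin server
# and the result is stored for subsequent requests.

def fetch_from_origin(url):
    # Stand-in for a real HTTP request to the originating server.
    return "content of " + url


class WebCache:
    def __init__(self):
        self.store = {}          # url -> cached object
        self.hits = 0
        self.misses = 0

    def request(self, url):
        if url in self.store:    # cache hit: no outbound traffic
            self.hits += 1
            return self.store[url]
        self.misses += 1         # cache miss: go to the origin
        data = fetch_from_origin(url)
        self.store[url] = data   # keep a copy for later requests
        return data


cache = WebCache()
cache.request("example.org/logo.png")   # miss: fetched from origin
cache.request("example.org/logo.png")   # hit: served locally
```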
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. Web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
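&lt;br /&gt;
The hierarchical lookup just described can be sketched as follows (a toy model in which plain dictionaries stand in for the caches at each level; all names are illustrative):&lt;br /&gt;

```python
# Sketch of a hierarchical cache lookup: a request climbs the levels
# (browser -> local -> regional -> national) until it is satisfied,
# and the result is copied into every lower level on the way back.

def hierarchical_request(levels, url, origin):
    for i, cache in enumerate(levels):        # lowest level first
        if url in cache:
            data = cache[url]
            break
    else:
        i, data = len(levels), origin(url)    # miss everywhere: origin
    for cache in levels[:i]:                  # copy into lower levels
        cache[url] = data
    return data


browser, local, regional, national = {}, {}, {}, {}
levels = [browser, local, regional, national]
hierarchical_request(levels, "example.org/", lambda u: "page")
# After one request, a copy of the page sits at every level.
```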
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing as well as introduces fault tolerance that was not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
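&lt;br /&gt;
The cooperating-caches idea described above can be sketched as follows, with a shared directory standing in for the per-cache metadata (the class, names, and structure are illustrative only, not any particular published design):&lt;br /&gt;

```python
# Sketch of distributed web caching: cooperating caches share a
# directory of which peer holds which object, so a local miss can be
# served by a peer instead of the origin server.

class DistributedCache:
    directory = {}                     # shared metadata: url -> holder

    def __init__(self, name):
        self.name = name
        self.store = {}                # this cache's local objects

    def request(self, url, origin):
        if url in self.store:                      # local hit
            return self.store[url]
        holder = DistributedCache.directory.get(url)
        if holder is not None:                     # served by a peer
            data = holder.store[url]
        else:                                      # miss: origin fetch
            data = origin(url)
        self.store[url] = data
        DistributedCache.directory[url] = self     # update metadata
        return data


a, b = DistributedCache("isp-a"), DistributedCache("isp-b")
a.request("example.org/", lambda u: "page")   # fetched from origin
b.request("example.org/", lambda u: "page")   # served by peer a
```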
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Customer satisfaction matters, of course, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. End users who are customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. Users could share their caches with each other, allowing the building of neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since most users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As before, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at large scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back end to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available only to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that it would allow fragments of the internet that, for one reason or another, become disconnected from the internet as a whole to still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the internet willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in reachable caches. This added robustness could reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that might be satisfiable by the caches of another local ISP must be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects an added level of reliability that is not present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication would allow for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the internet, users would still be able to use popular cached web applications and data until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into the public hands, private interests in how web caches are controlled would now be a secondary concern to those of the public. This means that any innovation in web caching along with new technologies to improve how web caching is done can be implemented if it is in the best interests of the public. Currently we must rely on these upgrades being a worthy investment for a given ISP, regardless of how much upgrades would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme is incrementally deployable, both in software and in infrastructure. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure in place, even if it were to incorporate the ISP caches already deployed, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work, whether converting the old ISP caches or setting up new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions served by a given cache. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is needed to refer to the different resources within this distributed system.  DNS (Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
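&lt;br /&gt;
The static, distributed tree view described above can be sketched as follows (the zone data and the address are made up, and real resolution involves querying separate servers at each level rather than walking one in-memory structure):&lt;br /&gt;

```python
# Toy model of DNS as a static tree: the name is walked from the root,
# one label at a time, until a leaf holding an address is reached.
# The zone contents and the 192.0.2.x documentation address are
# illustrative only.

ROOT = {
    "ca": {
        "carleton": {
            "scs": {"homeostasis": "192.0.2.1"},   # made-up address
        },
    },
}


def resolve(name, tree=ROOT):
    node = tree
    for label in reversed(name.split(".")):   # most-significant first
        node = node[label]                    # descend one level
    return node                               # leaf holds the address


resolve("homeostasis.scs.carleton.ca")   # -> "192.0.2.1"
```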
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  It is understood by users that all internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative options, such as Google&#039;s public DNS project or OpenDNS. This can be a healthy approach to avoid the ISP issues, but it still places significant trust in another corporation or in &amp;quot;good Samaritans&amp;quot; in a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.  &amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These problems centre on bottlenecks, update propagation, attack resiliency, and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users.  For example, Bell Canada customers across the entire country are served by two servers for this service.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the internet.  DNS servers around the world update their records on fixed schedules and, with caching taken into account, this period is required for changes to spread.&lt;br /&gt;
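This delay is a direct consequence of record time-to-live (TTL) values: a resolver serves a cached answer until the TTL expires, so a change at the authoritative server is invisible to clients of that resolver until then. A minimal sketch of the behaviour (the names, addresses and TTLs are illustrative):&lt;br /&gt;

```python
import time

class TtlCache:
    """Minimal sketch of resolver-side caching: a record is served
    from the cache until its TTL expires, so a change made at the
    authoritative server only becomes visible after that."""
    def __init__(self, lookup, clock=time.time):
        self.lookup = lookup   # stand-in for an authoritative query,
        self.clock = clock     # returning (ip, ttl_seconds)
        self.cache = {}        # name -> (ip, expires_at)

    def resolve(self, name):
        hit = self.cache.get(name)
        if hit and hit[1] > self.clock():
            return hit[0]      # still fresh: possibly stale data served
        ip, ttl = self.lookup(name)
        self.cache[name] = (ip, self.clock() + ttl)
        return ip

# Demonstration with a fake clock so expiry is deterministic.
now = [0]
records = {"example.org": ("192.0.2.1", 3600)}
cache = TtlCache(lambda n: records[n], clock=lambda: now[0])
first = cache.resolve("example.org")          # fetched and cached
records["example.org"] = ("192.0.2.9", 3600)  # authoritative change
stale = cache.resolve("example.org")          # still the old answer
now[0] = 3601                                 # TTL has now expired
fresh = cache.resolve("example.org")          # change finally visible
```

Multiply this effect across the world's resolvers, each with its own refresh moment, and the 48-hour propagation window follows.&lt;br /&gt;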
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited number of servers to severely disrupt internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves considerable room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Research is being done on improving the aspects of DNS indicated above.  One candidate for a next-generation naming system, actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;The Design and Implementation of a Next Generation Name Service for the Internet - Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system promises improvement on all the factors indicated above.  It has the added benefit of being incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the internet. Given the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
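One way to picture peer-to-peer distribution of name records is consistent hashing, where each peer owns an arc of the key space and adding or removing one peer remaps only that arc, which is what makes incremental deployment plausible. CoDoNS itself layers Beehive over the Pastry overlay; the sketch below, with hypothetical peer names, is only a simplified illustration of spreading records over cooperating peers, not the CoDoNS algorithm.&lt;br /&gt;

```python
import hashlib
from bisect import bisect_right

def h(key):
    """Map a string onto the hash ring as a large integer."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class Ring:
    """Consistent-hash ring: each peer owns the arc of key space up
    to its own position, so the record for a name always lives on a
    deterministic peer, and churn remaps only one arc."""
    def __init__(self, peers):
        self.ring = sorted((h(p), p) for p in peers)

    def owner(self, name):
        keys = [k for k, _ in self.ring]
        i = bisect_right(keys, h(name)) % len(self.ring)
        return self.ring[i][1]

# Hypothetical peers cooperatively caching DNS records.
ring = Ring(["peer-a", "peer-b", "peer-c"])
```

Any client can compute owner(name) locally and fetch the record from that peer, with no central server to bottleneck or attack.&lt;br /&gt;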
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  Its user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service.  Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the internet as we know it today.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issues are addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it would be done when deemed best for the public.  The incremental deployability of a service such as CoDoNS would make such a transition practical.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization would be required to work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring and maintaining the current system, or mandating a new one, would impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead and caution in their decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the internet should be left in private hands, and only after these aspects have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can add these qualities, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users in its reach then, by definition, it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes these are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative effect of their benefits. Although proposed public goods would all have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely be amplified by the gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet meeting the above criteria that are converted into public goods, the more we will notice each individual advantage.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
The internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, significant portions of the Internet would be undesirable or impossible for the public to hold. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes more sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one would be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and only true, way of doing this is to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9425</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9425"/>
		<updated>2011-04-11T20:20:42Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* General Public Goods and the Internet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all.  The internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it to operate should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the internet.  Using three examples of public-good candidates (physical infrastructure, web caching and DNS) we illustrate the viability and benefits of this conversion.  Finally, we establish criteria with which to identify other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, the police, water and fresh air are all examples of public goods. We propose to add the internet to this long list. The internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it to one is a more difficult process.  The Internet is a heterogeneous system of computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (ie. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the internet, web caching and DNS. We propose how these aspects could be removed from the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the internet as candidates for the public good.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* ie. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (ie. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the internet is as important as roads for conducting their daily activities, yet while roads are not privately owned the infrastructure of the internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers). These are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of the consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using criteria decided by the ISPs.  While congestion control can benefit everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We don&#039;t know whether the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid them, implemented by slowing or disallowing traffic to competitors.  While this has not been openly proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: during the uprising in Egypt, the incumbent government shut down the population&#039;s access to the internet simply by forcing the ISPs to shut down.  This is not a conclusive list of the weaknesses that private ownership of the infrastructure presents, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists and other means.  Additionally, government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing it is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably its speed would not match the incumbents&#039;, and people might still desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure; in doing so we can see the concrete benefits, in addition to reduced dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist alongside the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers, plus a large number of mobile nodes with variable availability consisting of users&#039; laptops and internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres in which the mesh is located could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, though performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISP might even disappear entirely.&lt;br /&gt;
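The super-node election mentioned above can be caricatured as ranking peers by a score and promoting the top fraction. The availability-times-bandwidth score and the default fraction below are assumptions chosen purely for illustration; real mesh routing protocols such as DART use far richer criteria.&lt;br /&gt;

```python
def elect_super_nodes(nodes, fraction=0.1):
    """Illustrative super-node election: rank peers by a simple
    availability * bandwidth score and promote the top fraction.
    nodes: list of (node_id, uptime_fraction, bandwidth_mbps)."""
    scored = sorted(nodes, key=lambda n: n[1] * n[2], reverse=True)
    k = max(1, int(len(scored) * fraction))  # always elect at least one
    return [node_id for node_id, _, _ in scored[:k]]
```

A stable home machine with good bandwidth outranks a fast but rarely-present laptop, which matches the intuition that routing state should live on highly available nodes.&lt;br /&gt;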
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a major benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the internet is surfing or visiting low-bandwidth websites.  It could also help make internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, with its low population density, has areas that parallel the rural regions where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the urban centre&#039;s higher-speed wired infrastructure.  The density of connection points has been studied and is related to the potential speeds sustainable by the mesh, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in providing this infrastructure would necessitate an increase in taxes. Since support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, normally considered a low-bandwidth service: if a large attachment were present, it would make sense to use the faster network connection to download it.  The software would thus have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could take the form of consumed CPU cycles and the additional power needed to keep nodes available.  Alternatively, a dedicated piece of hardware, such as a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
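The software change described above amounts to picking a network per transfer: small, latency-tolerant traffic goes over the public mesh, large transfers over the faster private link. A minimal sketch, in which the function name and the 1 MB threshold are purely illustrative assumptions:&lt;br /&gt;

```python
def pick_network(transfer_bytes, mesh_up, isp_up, threshold=1_000_000):
    """Sketch of two-level network selection: route small transfers
    (email text, messaging) over the public mesh and large ones
    (attachments, streaming) over the faster private ISP link.
    The 1 MB threshold is an arbitrary illustrative value."""
    if transfer_bytes > threshold and isp_up:
        return "isp"       # big transfer, fast private link available
    if mesh_up:
        return "mesh"      # default to the public infrastructure
    return "isp" if isp_up else None  # fall back, or fail entirely
```

An email client using this policy would fetch message text over the mesh but switch to the ISP link for a large attachment, exactly the case described above.&lt;br /&gt;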
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be reused later without retrieving the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of websites do not change very often (ie. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can exist either on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency to the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can also be reduced significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the internet, allowing users to access documents even if the supplying web server is down as well as allowing organizations to analyze internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
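&lt;br /&gt;
The caching flow described above (check the cache, serve a hit locally, and store a fetched copy on a miss) can be sketched in a few lines of Python. This is a minimal illustration with hypothetical names; real caches must also honour HTTP freshness headers such as Cache-Control:&lt;br /&gt;

```python
# Minimal sketch of a web cache (hypothetical names; real caches also
# check freshness headers before serving a stored copy).
class WebCache:
    def __init__(self, origin_fetch):
        self.store = {}              # url mapped to cached response body
        self.origin_fetch = origin_fetch
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:        # cache hit: serve locally
            self.hits += 1
            return self.store[url]
        self.misses += 1             # cache miss: go to the origin server
        body = self.origin_fetch(url)
        self.store[url] = body       # keep a copy for later requests
        return body

# Simulated origin server stands in for the real web server.
cache = WebCache(lambda url: "contents of " + url)
cache.get("http://example.com/logo.png")   # miss: fetched from origin
cache.get("http://example.com/logo.png")   # hit: served from the cache
```

Here the second request for the logo is served from the cache, saving the bandwidth of a second origin fetch.&lt;br /&gt;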
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches are not examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches use bandwidth efficiently by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
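&lt;br /&gt;
The hierarchical lookup just described can be sketched as follows. This is a simplified model with hypothetical names; real hierarchies also handle record expiry and cache replacement:&lt;br /&gt;

```python
# Sketch of hierarchical web caching: each level checks its own store,
# forwards misses upward, and keeps a copy as the response travels back
# down the hierarchy.
class HierarchicalCache:
    def __init__(self, name, parent=None, origin_fetch=None):
        self.name = name
        self.parent = parent            # next level up (regional, national, ...)
        self.origin_fetch = origin_fetch
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name   # satisfied at this level
        if self.parent is not None:
            body, level = self.parent.get(url)  # pass the miss upward
        else:
            body, level = self.origin_fetch(url), "origin"
        self.store[url] = body                  # leave a copy at this level
        return body, level

national = HierarchicalCache("national", origin_fetch=lambda u: "data for " + u)
regional = HierarchicalCache("regional", parent=national)
local    = HierarchicalCache("local", parent=regional)

body, served_from = local.get("http://example.com/")    # first request reaches the origin
body2, served_from2 = local.get("http://example.com/")  # now satisfied locally
```

After the first request, a copy exists at every level, so subsequent requests anywhere under the same hierarchy are served closer to the demand.&lt;br /&gt;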
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing as well as introduces fault tolerance that was not available to strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
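&lt;br /&gt;
The cooperative lookup described above can be sketched as follows. This is an idealized model with hypothetical names; the shared directory stands in for the meta-data that real systems exchange through digests or hash-based request routing:&lt;br /&gt;

```python
# Sketch of one-level distributed web caching: cooperating caches share
# meta-data about each other's contents, so a miss at one cache can be
# satisfied by a sibling instead of the origin server.
class DistributedCache:
    def __init__(self, name, directory):
        self.name = name
        self.store = {}
        self.directory = directory      # shared map of url to owning cache

    def put(self, url, body):
        self.store[url] = body
        self.directory[url] = self      # advertise the copy to the other caches

    def get(self, url, origin_fetch):
        if url in self.store:
            return self.store[url], self.name
        owner = self.directory.get(url)
        if owner is not None:           # a sibling cache holds a copy
            return owner.store[url], owner.name
        body = origin_fetch(url)        # nobody has it: fetch and advertise
        self.put(url, body)
        return body, "origin"

directory = {}
cache_a = DistributedCache("cache-a", directory)
cache_b = DistributedCache("cache-b", directory)
origin = lambda u: "data for " + u

cache_a.get("http://example.com/", origin)              # miss everywhere: origin fetch
body, source = cache_b.get("http://example.com/", origin)  # satisfied by sibling cache-a
```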
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the internet, and therefore is vitally important to the end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP controlled web caches into a public good would allow for a balance between both the financial and end user experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in the exact same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood specific, ultra fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to that of BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP controlled proxy server decides what data exists where and points users to other users to satisfy web requests. On the other hand, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding what other users to contact to try and retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently carried at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could actually vary from neighbourhood to neighbourhood or even house to house depending on the given circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful device could be built quite inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with the demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also making use of the locally cached data. In this type of system, web developers would design their applications to make use of the available resources and then maintain a minimal back end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without needing the enormous financial or physical resources of modern day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the internet that, for one reason or another, become disconnected from the internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, as well as have access to all of the web data currently stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests being sent out from the caches to the originating web servers will go down. Currently web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that might be able to be satisfied by caches implemented by another local ISP must be retrieved all the way from the originating web server. With the proposed architecture, these caches could then work together, essentially multiplying the available cache size. This would result in these types of web requests being satisfied locally and reducing the amount of long distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied are significantly improved. This then leads to a reduction in wait time for the end users, improving their overall web experience. This would be especially noticed if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) was implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the existence of the standardized caching hierarchy would mean that more requests would be satisfied by the regional, provincial or national level caches than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed web caching strategy implements distributed caching at each level of the caching hierarchy, it adds a level of reliability that isn&#039;t present in modern web caching. Since it is likely that the storage space of the distributed caches at each level will be larger than the amount that can be efficiently used as a cache, this would allow for data duplication. This duplication would allow for fault tolerance, and the caches could be implemented in such a way as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the internet, users would still be able to use popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into the public hands, private interests in how web caches are controlled would now be a secondary concern to those of the public. This means that any innovation in web caching along with new technologies to improve how web caching is done can be implemented if it is in the best interests of the public. Currently we must rely on these upgrades being a worthy investment for a given ISP, regardless of how much upgrades would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software and infrastructure wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users/regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the already deployed ISP caches, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, it is likely that rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or to set up the new caches. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems are set up, they would have to be closely monitored and tuned as conditions changed in the regions that the given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is needed to refer to the many resources within this distributed system. DNS (Domain Name System) aims to aid this process by allowing resources to be referred to by name, rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System]&amp;lt;/ref&amp;gt;&lt;br /&gt;
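&lt;br /&gt;
From an application&#039;s point of view, the whole lookup is a single library call. A minimal illustration, resolving the loopback name so that no external network access is needed:&lt;br /&gt;

```python
import socket

# Resolve a hostname to an IPv4 address, as a browser does before connecting.
# "localhost" is used here so the example works without external connectivity;
# on typical systems it maps to the loopback address 127.0.0.1.
address = socket.gethostbyname("localhost")
print(address)
```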
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher level view of the system is taken. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database or tree of names to IP addresses for their users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  It is understood by the users that all internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative services, such as Google&#039;s public DNS project or OpenDNS. This can be a healthy approach to avoid the ISP issues, but it still places significant trust in another corporation or in &amp;quot;good Samaritans&amp;quot; in a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues arise here as well, however.  &amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These issues involve bottlenecks, update propagation, attack resiliency, and general performance.  Any replacement system should improve upon these issues.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a low number of servers is accessed by many users.  For example, Bell Canada customers are served by two servers for this service for the entire country.  Just as web caching has been shown in a previous section to improve general browsing and decrease latency, the same concept can be used for DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the internet.  DNS servers around the world keep serving cached copies of records until those records expire, so this period is required to get the changes across.&lt;br /&gt;
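&lt;br /&gt;
The propagation delay stems from the time-to-live (TTL) value attached to each cached record: a resolver keeps returning its cached answer until the TTL expires. A small sketch of this behaviour (hypothetical names, using the 192.0.2.0/24 documentation address range):&lt;br /&gt;

```python
import time

# Sketch of TTL-driven DNS caching: a record changed at the authoritative
# server is not seen by clients until the cached copy's TTL runs out.
class CachingResolver:
    def __init__(self, lookup_upstream):
        self.lookup_upstream = lookup_upstream
        self.cache = {}   # name mapped to (address, absolute expiry time)

    def resolve(self, name, now=None):
        if now is None:
            now = time.time()
        entry = self.cache.get(name)
        if entry is not None and entry[1] > now:
            return entry[0]                      # cached copy still fresh
        address, ttl = self.lookup_upstream(name)
        self.cache[name] = (address, now + ttl)  # cache until the TTL expires
        return address

records = {"example.com": ("192.0.2.1", 3600)}   # authoritative data, 1 hour TTL
resolver = CachingResolver(lambda name: records[name])

first = resolver.resolve("example.com", now=0)     # fetched upstream
records["example.com"] = ("192.0.2.2", 3600)       # nameserver change
stale = resolver.resolve("example.com", now=1800)  # still the old answer
fresh = resolver.resolve("example.com", now=4000)  # TTL expired: new answer
```

With many resolvers caching independently, the worst-case delay before everyone sees the change is roughly the record&#039;s TTL, which is where figures like 48 hours come from.&lt;br /&gt;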
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure, indicated by the bottleneck issue, make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cramp internet traffic at any time.  Measures are in place to prevent this kind of attack, however, like anything security based, it requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Research is being done on improving the indicated aspects of DNS.  One candidate next generation naming system, actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS)&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;The Design and Implementation of a Next Generation Name Service for the Internet - Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf]&amp;lt;/ref&amp;gt;. Through a structure based on caching and peer to peer distribution, the system boasts an improvement on all the factors indicated above.  It also has the benefit of being incrementally deployable, which is a very important point when it comes to upgrading any part of a complex, distributed system like the internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  The user base depends on some form of trusted source, whether it is a governed initiative, a corporately controlled process, or a user contributed service.  Overall, the necessity of the service makes it a prototypical candidate to be a public good.  It is required to access and use the internet as we know it today.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is satisfied.  Users must trust some entity for the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it would be done when deemed most ideal for the public.  The incremental deployment of the CoDoNS service described above is one example of how such an upgrade could be rolled out gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the internet, continuing to provide basic service even when disconnected from the whole.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring and maintaining, or mandating, the current system will impose a financial burden on the public, as would any good that is brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority, given their lower overhead and fewer constraints on decision making.  Users may miss out on the newest services available while the authority evaluates its upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could start owning aspects that are not permanent and will become obsolete quickly. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would end up being very expensive. In general, novel aspects of the internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is a central metric when discussing any distributed system, and it is a central concern here as well: transitioning an aspect of the Internet into a public good should generally improve, or at least not degrade, its performance. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently, many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, and sometimes these are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effects that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more noticeable each individual advantage will become.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity for application developers&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the internet is becoming a ubiquitous entity in modern-day society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or impossible for the public to hold. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes more sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. By using this set of criteria, one would be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that the fundamental aspects of the Internet are secured. The best and only true way of doing this will be to give users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9424</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9424"/>
		<updated>2011-04-11T20:20:29Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* General Public Goods and the Internet */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all.  The internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it to operate should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the internet.  Using three examples of public-good candidates (physical infrastructure, web caching and DNS), we illustrate the viability and benefits of this conversion.  Finally, we establish criteria with which to identify other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the internet to this list. The internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet for individuals worldwide is quickly becoming essential. While it might be easy to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (ie. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the internet, web caching and DNS. We propose how these aspects could be removed from being solely in the hands of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the internet as candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* ie. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (ie. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets using various criteria decided by the ISPs.  While congestion control benefits everyone, when the technology is implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times.  We don&#039;t know whether the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt during an uprising, the incumbent government shut down the population&#039;s access to the internet by simply forcing the ISPs to shut down.  This is not a conclusive list of the weaknesses that private ownership of the infrastructure presents, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the ISPs to behave in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists and other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably, the speed of this new infrastructure would not match the incumbents&#039;, and people might still desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability consisting of users home computers.  In addition there would be a large number of highly mobile nodes that would move and provide variable availability consisting of users laptop and internet aware personal devices.  The nodes would use algorithms to elect member to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, though performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
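The election of super nodes mentioned above can be sketched as follows; the capability score (availability times link speed) and the default fraction are assumptions made for illustration, not details of DART or the other cited protocols:&lt;br /&gt;

```python
def elect_super_nodes(nodes, fraction=0.1):
    """Sketch: pick the most capable nodes as routing super nodes.

    nodes maps a node id to (uptime fraction, link speed in Mbit/s).
    Real mesh protocols elect super nodes in a distributed fashion;
    this centralized version only illustrates the selection criterion.
    """
    # Rank by a simple capability score: stable, fast nodes first.
    ranked = sorted(nodes, key=lambda n: nodes[n][0] * nodes[n][1], reverse=True)
    count = max(1, int(len(nodes) * fraction))
    return set(ranked[:count])

mesh = {
    'home-pc': (0.95, 100.0),   # static node, high availability
    'laptop':  (0.40, 54.0),    # mobile node, variable availability
    'phone':   (0.20, 20.0),
    'router':  (0.99, 1000.0),
}
print(elect_super_nodes(mesh, fraction=0.25))  # the most capable quarter
```

In a real mesh this selection would be computed in a distributed fashion, but the criterion of favouring stable, fast nodes is the same.&lt;br /&gt;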
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services more tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the internet might be surfing or visiting low-bandwidth websites.  It could also help make internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to our low population density, has areas that parallel the rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure in the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the potential speeds that are sustainable by the mesh, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of some infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible at each.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To take full advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to be changed.  An example is email, which is normally considered a low-bandwidth service: if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
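Such network-aware software might behave as in the sketch below; the one-megabyte threshold and the network names are arbitrary assumptions for illustration:&lt;br /&gt;

```python
def choose_network(payload_bytes, fast_network_available, threshold_bytes=1_000_000):
    """Sketch: send small transfers over the public mesh and large ones
    over the faster private link when it is available. The threshold is
    an arbitrary assumption, not a measured value."""
    if fast_network_available and payload_bytes > threshold_bytes:
        return 'private-isp'
    return 'public-mesh'

print(choose_network(5_000, True))        # plain email body: 'public-mesh'
print(choose_network(25_000_000, True))   # large attachment: 'private-isp'
print(choose_network(25_000_000, False))  # no fast link: 'public-mesh'
```

The point is only that the decision can be made per transfer, so low-bandwidth traffic stays on the public infrastructure by default.&lt;br /&gt;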
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could take the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of websites that do not change very often (ie. logos, static text, pictures, other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs, and any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the reduction in latency apparent to the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data has to travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache adds to the internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
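The bandwidth savings described above come from answering repeated requests locally, as the following hypothetical proxy-cache sketch shows (real proxies such as Squid also honour HTTP cache-control headers and expiry):&lt;br /&gt;

```python
class WebCache:
    """Sketch of a proxy-style web cache; hypothetical, not a real proxy."""

    def __init__(self, fetch_from_origin):
        self.fetch = fetch_from_origin  # callable mapping a url to its body
        self.store = {}
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1          # served locally: no upstream bandwidth
            return self.store[url]
        self.misses += 1            # forwarded to the origin server
        body = self.fetch(url)
        self.store[url] = body
        return body

origin_requests = []
def origin(url):
    origin_requests.append(url)     # record the upstream traffic
    return 'content of ' + url

cache = WebCache(origin)
for _ in range(3):
    cache.get('http://example.org/logo.png')
print(cache.hits, cache.misses, len(origin_requests))  # 2 1 1
```

Three requests for the same object cost only one trip to the origin server; the other two are served from the cache.&lt;br /&gt;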
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national level cache. Web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular content to propagate towards the demand. &lt;br /&gt;
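The hierarchical lookup described above can be sketched as follows (a minimal illustration; the class and level names are invented for this sketch, not taken from any deployed system):

```python
# Sketch of a hierarchical web cache: a miss climbs the hierarchy until
# some level holds the data, and the response leaves a copy at every
# level it passes on the way back down.

class CacheLevel:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # next level up (None at the top)
        self.store = {}               # url -> cached document

    def fetch_from_origin(self, url):
        # Stand-in for contacting the originating web server.
        return f"content of {url}"

    def get(self, url):
        if url in self.store:         # hit at this level
            return self.store[url]
        if self.parent is not None:   # miss: ask the next level up
            data = self.parent.get(url)
        else:                         # top of the hierarchy: go to origin
            data = self.fetch_from_origin(url)
        self.store[url] = data        # leave a copy on the way back down
        return data

national = CacheLevel("national")
regional = CacheLevel("regional", parent=national)
local    = CacheLevel("local",    parent=regional)

doc = local.get("http://example.org/page")   # first request: full climb
```

After the first request, every level holds a copy, so subsequent requests from the same region are satisfied locally.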
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the contents of all of the other caches it cooperates with, and uses this meta-data to fulfill web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
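The peer meta-data idea above can be sketched like this (an illustrative toy, assuming each cache simply publishes the set of URLs it holds; real systems use compact digests):

```python
# Sketch of one level of cooperative (distributed) caching: each cache
# consults its peers' meta-data before falling back to the origin server.

class PeerCache:
    def __init__(self, name):
        self.name = name
        self.store = {}               # url -> cached document
        self.peers = []               # cooperating caches at the same level

    def directory(self):
        # Meta-data shared with peers: the set of URLs this cache holds.
        return set(self.store)

    def get(self, url):
        if url in self.store:         # local hit
            return self.store[url]
        for peer in self.peers:       # miss: check each peer's meta-data
            if url in peer.directory():
                return peer.store[url]
        # No cooperating cache has it: fetch from origin (stand-in).
        data = f"content of {url}"
        self.store[url] = data
        return data

a, b = PeerCache("a"), PeerCache("b")
a.peers, b.peers = [b], [a]
b.store["http://example.org/x"] = "cached by b"

hit = a.get("http://example.org/x")   # satisfied by peer b, no origin fetch
```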
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists, but a number of caches on each level also cooperate with each other in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
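The hybrid resolution order can be sketched in the spirit of ICP: on a miss, query the siblings at the same level first, then climb the hierarchy (a simplification; real ICP exchanges UDP query/reply packets rather than inspecting dictionaries):

```python
# Hybrid lookup sketch: local cache, then siblings (distributed step),
# then the parent chain (hierarchical step), then the origin server.

def resolve(url, cache, siblings, parent_chain):
    if url in cache:                      # local hit
        return cache[url], "local"
    for sib in siblings:                  # ask cooperating siblings
        if url in sib:
            return sib[url], "sibling"
    for parent in parent_chain:           # climb the hierarchy
        if url in parent:
            return parent[url], "parent"
    return f"content of {url}", "origin"  # last resort: origin server

local, sib, regional = {}, {"u2": "from sibling"}, {"u3": "from regional"}
```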
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction matters, but it is not their top priority. Transitioning ISP controlled web caches into a public good would allow for a balance between the financial and end user experience aspects of web caching. This could be achieved by the government taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in the exact same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. Customers of one ISP could then be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state level web caches, and finally a national level. These would all be standardized, allowing regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized, standardized cache hierarchies would reduce wasted bandwidth and improve the end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good, it would also be in the end users&#039; best interest to participate if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling the construction of neighbourhood specific, ultra fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for lower level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power, removing the burden from the users&#039; computers and allowing a special purpose device to take over. Since most users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As before, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources available to modern day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the internet which, for one reason or another, become disconnected from the internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or even a section of the internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in all of the reachable caches. This added robustness would certainly reduce the amount of panic inherent to these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from a user of one ISP that could be satisfied by a cache belonging to another local ISP must instead be serviced all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would be satisfied locally, reducing the number of long distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied nearby are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed web caching strategy implements distributed caching at each level of the caching hierarchy, it injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data can be duplicated. This duplication would allow for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability under full web application caching: in the event that a single region is disconnected from the internet, users would still be able to use the cached popular web applications and data until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies that improve how web caching is done, could be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software and infrastructure wise, is incrementally deployable. A scheme similar to the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already deployed, would certainly entail significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions as well as the higher levels of the hierarchy (provincial, national, etc.) will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or to set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, which would require a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (Domain Name System) aims to aid this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database or tree of names to IP addresses for its users to access.&lt;br /&gt;
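The &amp;quot;static, distributed tree&amp;quot; view used in this discussion can be sketched as a toy resolver (the tree contents and the address below are made up for illustration; real DNS distributes each subtree across authoritative servers):

```python
# Toy resolver over a name tree: labels are walked right-to-left from
# the root, ending at an address record ("_addr" is an invented key).

tree = {
    "ca": {
        "example": {
            "www": {"_addr": "192.0.2.53"},   # illustrative address
        },
    },
}

def resolve(name):
    node = tree
    # e.g. "www.example.ca" is walked as ca -> example -> www
    for label in reversed(name.split(".")):
        if label not in node:
            return None                        # no such name (NXDOMAIN)
        node = node[label]
    return node.get("_addr")

ip = resolve("www.example.ca")
```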
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service.  Users must accept that all internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternative options, such as Google&#039;s Public DNS project or OpenDNS. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in &amp;quot;good Samaritans&amp;quot; in a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations, centring on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be addressed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must be accessed by many users.  For example, Bell Canada customers are served by two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the internet.  DNS servers around the world cache records for a fixed time-to-live (TTL), and a change only reaches clients once the cached records expire.&lt;br /&gt;
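Why propagation is slow can be seen in a minimal TTL cache sketch (the names, addresses and TTL below are illustrative): a resolver keeps serving an old answer until the record's expiry time passes, and only then re-queries.

```python
# Minimal sketch of a resolver's TTL cache: answers are reused until
# their time-to-live expires, which is what delays update propagation.

import time

class TTLCache:
    def __init__(self):
        self.records = {}                 # name -> (ip, expiry timestamp)

    def put(self, name, ip, ttl, now=None):
        now = time.time() if now is None else now
        self.records[name] = (ip, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.records.get(name)
        if entry is None or now >= entry[1]:
            return None                   # expired or absent: must re-query
        return entry[0]                   # still fresh: serve cached answer

cache = TTLCache()
cache.put("example.org", "192.0.2.1", ttl=3600, now=0)
still_cached = cache.get("example.org", now=1800)  # old answer still served
expired = cache.get("example.org", now=7200)       # TTL passed: re-query
```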
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited number of servers to severely disrupt internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security based, this requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Research is being done on improving the indicated aspects of DNS.  One candidate for a next generation naming system, actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;The Design and Implementation of a Next Generation Name Service for the Internet - Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer to peer distribution, the system boasts an improvement on all the factors indicated above.  It is also incrementally deployable, a very important point when it comes to upgrading any part of a complex, distributed system like the internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  Its user base depends on some form of trusted source, whether that is a governed initiative, a corporately controlled process, or a user contributed service.  Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the internet as we know it today.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next generation service or upgrade, it will be done when deemed most ideal for the public.  The incremental deployability of a service such as CoDoNS would make such an upgrade feasible without disrupting the existing system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could continue to function as self-contained pockets of the internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining and acquiring or mandating the current system will impose a financial burden on the public, as would any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority, given their lower overhead and lesser caution when it comes to decision making.  Users may miss out on the newest services available while the authority evaluates its upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could start owning aspects that are not permanent and will become obsolete quickly. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would end up being very expensive. In general, novel aspects of the internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are by definition something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes these are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
&lt;br /&gt;
An additional benefit of bringing aspects of the Internet under the public&#039;s control is the cumulative beneficial effect that would occur. Although proposed public goods would have to adhere to the criteria listed above, they would often do so in different ways. For instance, the basic level of service provided by the physical infrastructure as a public good is significantly different from the basic level of service provided by the proposed web caching scheme. On top of this, the performance improvements provided by one public good would most likely increase due to the performance gains introduced by other, new public goods. Generally speaking, the more aspects of the Internet fulfilling the above criteria that are converted into public goods, the more we will notice each individual advantage.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the Internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that would be undesirable or impossible for the public to hold. Many modern businesses rely on the Internet for a significant portion of their revenue and are actually responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes more sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. By using this set of criteria, one would be able to identify future public goods candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9398</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9398"/>
		<updated>2011-04-11T19:33:32Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS as a Public Good */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all. The Internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it to operate should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to the Internet. Using three examples of public goods candidates (physical infrastructure, web caching and DNS), we illustrate the viability and benefits of this conversion. Finally, we establish criteria with which to identify other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have recognized a need for and identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the Internet to this list. The Internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the Internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process. The Internet is a heterogeneous system of computers and hardware that runs on an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the Internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS. We propose how these aspects could be removed from being solely in the hands of private companies and converted into public goods. We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the Internet as candidates for the public good.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* e.g. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (ie. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased. While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication. Today the Internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual. For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet service providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise with the ISPs owning the infrastructure of the Internet. These companies make decisions based on their own profit margins and with little regard for the public good. One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;. Packet shaping is currently used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs. While congestion control benefits everyone, with the technology implemented by private companies we don&#039;t know which protocols are limited, by how much, or whether it is only done at peak times. We don&#039;t know if this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure. Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP. This could be implemented by slowing or disallowing traffic to competitors. While this hasn&#039;t been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;. More recently we have become acutely aware that ISPs provide convenient choke points. In Egypt during an uprising, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down. This is not a conclusive list of the weaknesses that private ownership of the infrastructure presents; there are a host of others, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two. The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism. This would transform the infrastructure into a virtual public good by legislating the behaviours of the ISPs to be in accordance with the best interest of the public. The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists and other means. Additionally, the government is slow to act, and this could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it. These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet. We are not proposing that the government take the infrastructure from the ISPs, but that it creates its own with the help of the people. This new infrastructure would coexist with the current ISPs and operate in parallel. Conceivably the speed of this new infrastructure wouldn&#039;t be as fast as the incumbents&#039;, and people might desire higher speeds. In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal). This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure. The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries. Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh. The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic. At the simplest level this mesh would be composed of a number of fairly static nodes with high availability, consisting of users&#039; home computers. In addition there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices. The nodes would use algorithms to elect members to act as super nodes responsible for routing information. While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;. When the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres. Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing. Finally, at the highest level, different countries could connect their meshes together. As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada. As wireless technology improves, the speed and coverage of the mesh will improve as well, and as the level of support increases, the publicly offered speed could increase too. Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
&lt;br /&gt;
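The super-node election sketched above can be illustrated with a toy example. This is not the DART or landmark-flooding protocol from the cited papers; the node attributes and the scoring rule (prefer static, highly available, fast nodes) are our own assumptions:

```python
# Toy sketch of super-node election in a wireless mesh. The scoring
# rule is hypothetical: penalize mobile nodes, reward availability
# and link speed.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    availability: float  # fraction of time online, 0.0-1.0
    bandwidth: float     # relative link speed
    mobile: bool         # mobile nodes make poor routers

def score(node: Node) -> float:
    # Mobile nodes are heavily penalized; otherwise weigh
    # availability against speed.
    penalty = 0.25 if node.mobile else 1.0
    return node.availability * node.bandwidth * penalty

def elect_super_nodes(nodes, k=2):
    """Pick the k best-scoring nodes to act as routing super nodes."""
    return sorted(nodes, key=score, reverse=True)[:k]

nodes = [
    Node("desktop-a", 0.95, 10.0, False),
    Node("laptop-b", 0.40, 8.0, True),
    Node("desktop-c", 0.90, 6.0, False),
    Node("phone-d", 0.30, 2.0, True),
]
supers = elect_super_nodes(nodes)
print([n.name for n in supers])  # the two static desktops win
```

In a real mesh the election would of course be distributed and re-run as nodes join, move and leave; the sketch only shows the selection criterion.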
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services more tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs&#039; networks. This in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness. A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be. Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition. Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario. In such a scenario it is likely that other forms of communication relying on centralized infrastructure would fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone. This could negate the need for ISPs for some users whose primary use of the Internet might be surfing or visiting low-bandwidth websites. This could also help make Internet access available to fiscally disadvantaged members of the population. Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;. Canada, due to its low population density, has areas that parallel the rural areas where this technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out. It could start in a single neighborhood, using the neighbours&#039; wireless connections to create a small network. As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed. The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure in the urban centre. The density of connection points has been studied, and there is a relationship between it and the potential speeds sustainable by the mesh, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to be changed. An example of this is email, which is normally considered a low-bandwidth service. If a large attachment were present, it would make sense to take advantage of the faster network connection to download it. Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. There are many aspects of many websites that do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate their overall operating costs and any reduction in requests that must be satisfied outside of the ISP are beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data has to travel is cut down significantly (as web caches are intended to be relatively close to the end user), the time to deliver the content to the end user can be reduced significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that a web cache brings to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
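The basic mechanism described above can be sketched in a few lines. This is a minimal in-memory illustration, not a real proxy: the `fetch` callable and the fake origin server are stand-ins for actual HTTP requests:

```python
# Minimal sketch of a web cache: serve repeated requests from local
# storage instead of re-fetching from the origin server.
class WebCache:
    def __init__(self, fetch):
        self._fetch = fetch   # called only on a cache miss
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[url] = self._fetch(url)
        return self._store[url]

origin_calls = []
def fake_origin(url):
    # Stand-in for a real HTTP fetch from the origin web server.
    origin_calls.append(url)
    return f"<html>content of {url}</html>"

cache = WebCache(fake_origin)
cache.get("http://example.org/logo.png")
cache.get("http://example.org/logo.png")   # served from cache
print(cache.hits, cache.misses, len(origin_calls))  # 1 1 1
```

The second request never reaches the origin, which is precisely the bandwidth saving the paragraph above attributes to ISP caches. A real cache would also honour expiry and validation headers ("barring certain conditions" above).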
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. In this type of system, web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
&lt;br /&gt;
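The climb-up/copy-down behaviour of a hierarchical cache can be sketched as follows. The level names and the in-memory stores are illustrative simplifications, not any particular deployed system:

```python
# Sketch of hierarchical caching: a request climbs the hierarchy
# (local -> regional -> national -> origin) until satisfied, and a
# copy is left at every level on the way back down.
class Level:
    def __init__(self, name, parent=None, origin=None):
        self.name, self.parent, self.origin = name, parent, origin
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name   # hit at this level
        if self.parent is not None:
            data, source = self.parent.get(url)  # climb the hierarchy
        else:
            data, source = self.origin(url), "origin"
        self.store[url] = data   # leave a copy at this level
        return data, source

def origin_server(url):
    return f"data for {url}"

national = Level("national", origin=origin_server)
regional = Level("regional", parent=national)
local = Level("local", parent=regional)

_, src1 = local.get("http://example.org/")   # miss everywhere -> origin
_, src2 = local.get("http://example.org/")   # now a local hit
print(src1, src2)  # origin local
```

After one fetch the object sits at every level, so nearby clients served by the same regional cache would also hit without touching the origin: this is the "popular sites propagate towards the demand" effect.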
Another potential architecture is distributed web caching. In such a structure there is only one level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing as well as introducing fault tolerance not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
&lt;br /&gt;
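The meta-data cooperation described above can be sketched with a shared directory mapping URLs to the peer that holds them. The shared dictionary is a deliberate simplification of the per-cache meta-data exchange the survey describes:

```python
# Sketch of distributed (single-level) caching: peers consult shared
# meta-data about who holds what, and forward misses to the right
# peer instead of going back to the origin.
class PeerCache:
    def __init__(self, name, directory, origin):
        self.name, self.directory, self.origin = name, directory, origin
        self.store = {}

    def put(self, url, data):
        self.store[url] = data
        self.directory[url] = self   # advertise this copy to peers

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        holder = self.directory.get(url)
        if holder is not None:
            return holder.store[url], holder.name  # served by a peer
        data = self.origin(url)                    # true miss
        self.put(url, data)
        return data, "origin"

directory = {}   # shared meta-data: which peer caches which URL
origin = lambda url: f"data for {url}"
a = PeerCache("cache-a", directory, origin)
b = PeerCache("cache-b", directory, origin)

a.get("http://example.org/")           # miss -> origin, cached at a
_, src = b.get("http://example.org/")  # b's miss is served by peer a
print(src)  # cache-a
```

Because any cooperating peer can satisfy the request, losing one cache only loses its copies, not the whole scheme, which is the fault-tolerance advantage noted above.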
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists; however, there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Obviously their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in the exact same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that used to be available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and enabling neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding themselves which other users to contact to retrieve the data. In that situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for low-level distributed caching would be to extend the capabilities of currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful machine could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity would arise to extend the classic definition of web caching. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that fragments of the internet that, for one reason or another, become disconnected from the internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers will go down. Currently, web caches implemented by different ISPs do not work together, so uncached requests from users of one ISP that could be satisfied by the caches of another local ISP must instead be serviced all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast, translating into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches instead of having to be sent all the way to the original web server.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication would allow for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region is disconnected from the internet, users would still be able to use the cached popular web applications and data until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the internet&#039;s most popular web sites and applications.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, since full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in the region. This basic level of service is non-existent with modern web caches.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies to improve how caching is done, could be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, it is certain that putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centers, rural regions as well as caches in the higher levels of the hierarchy (provincial, national, etc.) will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. It would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems are set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the internet&#039;s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system. DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a hostname, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher-level view of the system is taken. For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Responsibility for providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database or tree of names to IP addresses for its users to access.&lt;br /&gt;
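The simplified view of DNS used in this discussion — a static tree that returns an IP address when queried with a domain name — can be sketched as follows. The tree contents and addresses below are illustrative assumptions, not real DNS data, and the sketch ignores real-world details such as record types and recursion between servers.&lt;br /&gt;

```python
# Toy model of DNS as a static tree over name labels: a query walks the
# tree right-to-left through the labels of the name. Data is hypothetical.

# Each node maps a label to either a subtree (dict) or an address (str).
ROOT = {
    "com": {
        "example": {
            "www": "93.184.216.34",
        },
    },
    "ca": {
        "carleton": {
            "scs": "134.117.0.1",
        },
    },
}

def resolve(name):
    """Return the address for `name`, or None (like NXDOMAIN) on a miss."""
    node = ROOT
    for label in reversed(name.split(".")):
        if not isinstance(node, dict) or label not in node:
            return None
        node = node[label]
    return node if isinstance(node, str) else None

print(resolve("www.example.com"))   # 93.184.216.34
print(resolve("no.such.name"))      # None
```

Walking the labels from right to left mirrors how authority is actually delegated in DNS, from the root zone down through each subdomain.&lt;br /&gt;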
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user&#039;s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service. In practice, this means all DNS requests can be filtered or redirected as the ISP sees fit. For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL. This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011 [http://www.michaelgeist.ca/content/view/3199/1/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011 [http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011 [http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternatives, such as Google&#039;s Public DNS project or OpenDNS. This can be a healthy approach to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
However, issues arise when considering user privacy.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations. These revolve around bottlenecks, update propagation, attack resiliency and general performance. Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users. For example, Bell Canada customers are served by two DNS servers for the entire country. Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also affects attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain&#039;s nameserver can take up to 48 hours to propagate across the internet. DNS servers around the world update their records on a static schedule and, when caching is considered, this period is required to get the changes across.&lt;br /&gt;
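The propagation delay described above can be illustrated with a minimal caching-resolver sketch: a resolver keeps serving its cached record until the record&#039;s time-to-live (TTL) expires, so a change at the authoritative server only becomes visible once the old copy ages out. The resolver class, TTL value and addresses below are hypothetical.&lt;br /&gt;

```python
# Toy illustration of DNS update propagation: a cached record is reused
# until its TTL expires, hiding changes made at the authoritative server.

class CachingResolver:
    def __init__(self, authoritative, ttl):
        self.authoritative = authoritative  # dict: name -> address
        self.ttl = ttl                      # seconds a record may be reused
        self.cache = {}                     # name -> (address, fetch_time)

    def resolve(self, name, now):
        if name in self.cache:
            address, fetched = self.cache[name]
            if now - fetched < self.ttl:
                return address              # still fresh: serve cached copy
        # Cache miss or expired record: ask the authoritative server.
        address = self.authoritative[name]
        self.cache[name] = (address, now)
        return address

auth = {"example.com": "192.0.2.1"}
resolver = CachingResolver(auth, ttl=86400)         # 24-hour TTL

print(resolver.resolve("example.com", now=0))       # 192.0.2.1 (now cached)
auth["example.com"] = "198.51.100.7"                # the domain moves
print(resolver.resolve("example.com", now=3600))    # still 192.0.2.1
print(resolver.resolve("example.com", now=90000))   # 198.51.100.7 (expired)
```

With many resolvers caching independently, the last one to have fetched the old record determines how long the change takes to be visible everywhere, which is why propagation is quoted in hours rather than seconds.&lt;br /&gt;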
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks. Malicious users can target the limited number of servers to severely disrupt internet traffic at any time. Measures are in place to prevent this kind of attack; however, like anything security-related, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Research is being done on improving the aspects of DNS indicated above. One candidate for a next-generation naming system, actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS)&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;The Design and Implementation of a Next Generation Name Service for the Internet - Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf]&amp;lt;/ref&amp;gt;. Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above. It is also incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the internet. Due to the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole for its use as a public good.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted. Its user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service. Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the internet as we know it today.&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest in maintaining this service, the reliability and trust issue is satisfied. Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best interests in mind. Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it would be done when deemed most ideal for the public. The incremental deployability of a service such as CoDoNS, for example, would make such a transition practical.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the internet, guaranteeing a basic level of service.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour. This is a major issue if the authority is not trustworthy, so any such organization would be required to work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining and acquiring (or mandating) the current system will impose a financial burden on the public, as does any good that is brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given less overhead or caution in decision making. Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the internet to become a public good, it should be fundamental to the overall functionality of the internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the internet would then cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the internet should be left in private hands, and only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined as something everyone should have access to and something deemed essential, ensuring a basic level of service for the users of a given internet public good is essential. If an aspect of the internet cannot guarantee access to all of the users within its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
All things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the internet. Currently, many of the aspects of the internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities, and sometimes these are at odds with what would be best for the user.&lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the internet works, essentially transitioning the internet into a public good. However, there are significant portions of the internet that it would be undesirable or infeasible for the public to hold. Many modern businesses rely on the internet for a significant portion of their revenue and are responsible for a great deal of innovation and evolution within the internet itself. For these reasons it makes more sense to bring only portions of the internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the internet was derived. Using this set of criteria, one should be able to identify future public goods candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9396</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9396"/>
		<updated>2011-04-11T19:32:39Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS Evolution */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all. The internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it to operate should be placed in trust for the benefit of the entire population. In this paper we establish a model to help define public goods as they relate to the internet. Using three examples of public goods candidates (physical infrastructure, web caching and DNS), we illustrate the viability and benefits of this conversion. Finally, we establish criteria with which to identify other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods. From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose adding the internet to this long list. The internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the internet as a public good, identifying how to convert it into one is a more difficult process. The internet is a system of heterogeneous computers and hardware that runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the Internet, web caching and DNS.  We propose how these aspects could be removed from being solely in the hands of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the Internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide a base set of criteria that can be used to identify other portions of the Internet as candidates for the public good.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* e.g. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (e.g. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, we present a few key aspects of the Internet that would be excellent candidates for becoming public goods.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the Internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual.  For many, the Internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the Internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the Internet are referred to as ISPs (Internet service providers); these are the entities that any user currently must pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion; it works by assigning priorities to packets using criteria decided by the ISPs.  While congestion avoidance benefits everyone, when the technology is implemented by private companies we do not know which protocols are limited, by how much, or whether shaping is only done at peak times.  We do not know if this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently, we have become acutely aware that ISPs provide convenient choke points: during the uprising in Egypt, the incumbent government shut down the population&#039;s access to the Internet by simply forcing the ISPs to shut down.  This is not an exhaustive list of the weaknesses that private ownership of the infrastructure presents, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism.  This would transform the infrastructure into a virtual public good by legislating that the ISPs behave in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists and other means.  Additionally, government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law is passed preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people might still desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level, this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition, there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and other Internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has provided efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM Transactions on Networking, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
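&lt;br /&gt;
As a toy illustration of the election step described above, the following sketch (our own simplification, not the DART or landmark-flooding algorithms cited) scores the stationary, highly available nodes and promotes the best of them to super nodes:&lt;br /&gt;

```python
# Hypothetical sketch of super-node election in the proposed mesh.
# The node attributes, thresholds and scoring are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: str
    availability: float    # fraction of time online, 0.0-1.0
    bandwidth_mbps: float  # measured uplink capacity
    mobile: bool           # mobile nodes make poor routers

def elect_super_nodes(nodes, fraction=0.2):
    """Pick the top `fraction` of stable nodes to act as routing super nodes."""
    candidates = [n for n in nodes if not n.mobile and n.availability >= 0.9]
    # Favour nodes that are both highly available and well connected.
    candidates.sort(key=lambda n: n.availability * n.bandwidth_mbps, reverse=True)
    count = max(1, int(len(nodes) * fraction))
    return candidates[:count]

nodes = [
    Node("desktop-a", 0.99, 50.0, False),
    Node("desktop-b", 0.95, 20.0, False),
    Node("laptop-c", 0.40, 100.0, True),
    Node("phone-d", 0.30, 30.0, True),
    Node("desktop-e", 0.92, 5.0, False),
]
print([n.node_id for n in elect_super_nodes(nodes)])  # ['desktop-a']
```

In a real deployment the election would have to be re-run as nodes join, leave and move, which is where the routing research cited above comes in.&lt;br /&gt;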
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, in which other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for some users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and could also make Internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds the mesh can sustain, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible at each.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to use the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
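&lt;br /&gt;
The switching logic described above might be sketched as follows; the threshold, function name and network labels are illustrative assumptions, not an existing API:&lt;br /&gt;

```python
# Sketch of network selection in the two-level (mesh + ISP) model.
# The 256 KiB cutoff is an arbitrary assumption for illustration.
MESH_THRESHOLD_BYTES = 256 * 1024

def pick_network(payload_bytes, fast_link_available):
    """Route small transfers (email text, IM) over the free mesh;
    large ones (attachments, video) over the conventional ISP link when present."""
    if payload_bytes <= MESH_THRESHOLD_BYTES or not fast_link_available:
        return "mesh"
    return "isp"

print(pick_network(4_000, True))       # short email body -> mesh
print(pick_network(8_000_000, True))   # large attachment -> isp
print(pick_network(8_000_000, False))  # no fast link: fall back to mesh
```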
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be served later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of websites do not change very often (e.g. logos, static text, pictures and other multimedia) and hence are good candidates for caching&amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or can exist somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt;Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link]&amp;lt;/ref&amp;gt;. Internet service providers have a key interest in web caching and in most cases implement their own caches&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt;Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of the total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user&amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can be reduced substantially. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage at the providing web server&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that must be passed through by serving data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze Internet usage patterns&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have.&lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. In this type of system, web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web content to propagate towards the demand.&lt;br /&gt;
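&lt;br /&gt;
The lookup-and-copy behaviour of a cache hierarchy can be sketched as follows (class and level names are illustrative, not from the cited survey):&lt;br /&gt;

```python
# Minimal sketch of a hierarchical web cache: a miss climbs the hierarchy
# (local -> regional -> national -> origin) and the response is copied into
# every level it passed on the way back down.
class Cache:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.store = {}

    def get(self, url, origin_fetch):
        if url in self.store:                 # hit at this level
            return self.store[url], self.name
        if self.parent is not None:           # miss: ask the next level up
            data, served_by = self.parent.get(url, origin_fetch)
        else:                                 # top of hierarchy: go to origin
            data, served_by = origin_fetch(url), "origin"
        self.store[url] = data                # leave a copy on the way down
        return data, served_by

national = Cache("national")
regional = Cache("regional", parent=national)
local = Cache("local", parent=regional)

fetches = []
def origin_fetch(url):
    fetches.append(url)
    return f"<page for {url}>"

data, served_by = local.get("http://example.org/", origin_fetch)
print(served_by)    # origin  (first request climbs all the way up)
data, served_by = local.get("http://example.org/", origin_fetch)
print(served_by)    # local   (second request is a local hit)
print(len(fetches)) # 1       (the origin server was contacted only once)
```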
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches that cooperate with each other to satisfy web requests. To do this, each cache retains metadata about the content of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance not available in strictly hierarchical structures. Examples of such systems&amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link]&amp;lt;/ref&amp;gt; have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
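&lt;br /&gt;
A minimal sketch of this peer cooperation, with the metadata exchange simplified to direct registration (the class names and structure are our own illustration, not the cited systems):&lt;br /&gt;

```python
# Sketch of one-level distributed caching: each cache keeps an index of what
# its peers hold and forwards misses to the peer that has the object.
class PeerCache:
    def __init__(self, name):
        self.name = name
        self.store = {}
        self.peer_index = {}   # url -> peer holding it (the shared metadata)
        self.peers = []

    def add_peer(self, peer):
        self.peers.append(peer)

    def put(self, url, data):
        self.store[url] = data
        for peer in self.peers:              # advertise to peers' metadata
            peer.peer_index[url] = self

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        holder = self.peer_index.get(url)    # consult metadata, not the origin
        if holder is not None:
            return holder.store[url], holder.name
        return None, None                    # true miss: would go to origin

a, b = PeerCache("isp-a"), PeerCache("isp-b")
a.add_peer(b); b.add_peer(a)
b.put("http://example.org/logo.png", b"...")
data, served_by = a.get("http://example.org/logo.png")
print(served_by)   # isp-b: satisfied by the cooperating peer, not the origin
```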
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists, but a number of caches on each level cooperate with each other in a distributed fashion. This type of system can combine the different advantages of the hierarchical and distributed architectures. The Internet Cache Protocol&amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
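&lt;br /&gt;
As a concrete taste of ICP, the sketch below packs an ICP version 2 query message following the field layout described in RFC 2186; the request number, addresses and URL are placeholder values, and a real cache would of course also send and parse these packets over UDP:&lt;br /&gt;

```python
# Building an ICP v2 ICP_OP_QUERY message (RFC 2186): a 20-byte fixed header
# followed by the requester address and a null-terminated URL.
import struct

ICP_OP_QUERY = 1
ICP_VERSION = 2

def build_icp_query(request_number, url):
    payload = struct.pack("!I", 0) + url.encode() + b"\x00"  # requester addr + URL
    length = 20 + len(payload)                               # total message length
    header = struct.pack(
        "!BBHIII4s",
        ICP_OP_QUERY,    # opcode
        ICP_VERSION,     # version
        length,          # message length, header included
        request_number,  # matches the query to its reply
        0,               # options
        0,               # option data
        b"\x00" * 4,     # sender host address (often left zero)
    )
    return header + payload

packet = build_icp_query(42, "http://example.org/")
print(len(packet))   # 44: header (20) + addr (4) + URL (19) + NUL (1)
```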
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and is therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like a distributed web cache, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level caches, and finally a national level. These would all be standardized, allowing regional or provincial caches to serve web requests for users in different regions or provinces. Formalized and standardized cache hierarchies would reduce wasted bandwidth and improve the end user experience. They would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, making the web caches more fault tolerant and hence more robust.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in the end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow the building of neighbourhood-specific, ultra-fast caches. It could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and can point users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and actually deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase&amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
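&lt;br /&gt;
One simple way the local proxy could map objects to participating households is by hashing, as in this illustrative sketch (the peer names and the hashing choice are assumptions for illustration, not a detail from the cited work):&lt;br /&gt;

```python
# Sketch of a neighbourhood-level distributed cache: every participant hashes
# a URL the same way, so all of them agree on which household stores it
# without consulting a central index.
import hashlib

def responsible_peer(url, peers):
    """Deterministically map a URL to one of the participating machines."""
    digest = hashlib.sha256(url.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(peers)
    return peers[index]

peers = ["house-1", "house-2", "house-3", "house-4"]
url = "http://example.org/video.mp4"
# Every household computes the same answer for the same URL.
assert responsible_peer(url, peers) == responsible_peer(url, peers)
print(responsible_peer(url, peers) in peers)  # True
```

A production scheme would need to handle peers joining and leaving (e.g. with consistent hashing) so that churn does not reshuffle the whole cache.&lt;br /&gt;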
&lt;br /&gt;
Another option for enabling lower-level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided beforehand and can actually vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built quite inexpensively, especially at scale. Once devices like this became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would design their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application; growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this new definition of web caching is that individual fragments of the Internet which, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe, such as an earthquake, or a section of the Internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data currently stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached request from one ISP&#039;s users that could be satisfied by a nearby cache at another local ISP must instead travel all the way to the originating web server. With the proposed architecture, these caches could cooperate, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This reduces wait times for end users and improves their overall web experience. The effect would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: requests satisfied within a user&#039;s immediate neighbourhood would be extremely fast, translating into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied at the regional, provincial or national level rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Because the proposed web caching strategy implements distributed caching at each level of the hierarchy, it adds a level of reliability that is not present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than what can be used efficiently as a cache, the surplus allows for data duplication. This duplication provides fault tolerance, and the caches could be implemented to redistribute the remaining data in the event that a single cache went down. Reliability improves even more dramatically with full web application caching: if a single region is disconnected from the internet, users could still use the popular web applications and data that are cached until the region is reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, giving the internet&#039;s most popular sites and applications unprecedented reliability. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, because full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region is disconnected, all users in that region could still use any application or data stored in any cache within the region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity placed in public hands, private interests in how web caches are controlled would become secondary to those of the public. Any innovation in web caching, along with new technologies for improving how it is done, could be implemented whenever it is in the public&#039;s best interest. Currently, such upgrades happen only when they are a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme like the one proposed would most likely start in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
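The lookup path through the proposed hierarchy can be sketched as follows. This is a minimal illustration only: the three dictionary-backed levels (neighbourhood, regional, national) and the origin-fetch function are hypothetical stand-ins, not part of any deployed system.&lt;br /&gt;

```python
# Minimal sketch of a hierarchical cache lookup: a request walks up the
# levels (neighbourhood -> regional -> national) and only falls back to
# the origin server on a miss at every level. All names are illustrative.

def make_hierarchy():
    # each level is just a dict mapping URL -> cached response
    return [{}, {}, {}]  # neighbourhood, regional, national

def lookup(url, levels, fetch_from_origin):
    for i, cache in enumerate(levels):
        if url in cache:
            return cache[url], i          # hit: served at level i
    body = fetch_from_origin(url)         # miss everywhere: long-distance request
    for cache in levels:                  # populate every level on the way back
        cache[url] = body
    return body, len(levels)

levels = make_hierarchy()
origin_calls = []
def origin(url):
    origin_calls.append(url)
    return f"<page {url}>"

body, level = lookup("example.org/a", levels, origin)    # first request reaches the origin
body2, level2 = lookup("example.org/a", levels, origin)  # second is a neighbourhood hit
```

On the first request the origin is contacted and every level is populated on the way back; repeated requests are then served from the lowest (closest) level, which is exactly the bandwidth and latency saving described above.&lt;br /&gt;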
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits of making web caching a public good, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure in place, even if it incorporated the ISP caches already deployed, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of person-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur costs. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet’s vast ubiquity, as discussed in the previous sections, a convenient method is needed for referring to the different resources within this distributed system.  DNS (Domain Name System) serves this purpose by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System]&amp;lt;/ref&amp;gt;&lt;br /&gt;
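The &amp;quot;switchboard&amp;quot; behaviour can be seen directly from Python&#039;s standard library: supply a name, get back an address. This is only an illustration of the client-side interface; it uses whatever resolver the host happens to be configured with (typically the ISP&#039;s, or a public one).&lt;br /&gt;

```python
# A DNS lookup from the client's point of view: supply a hostname and
# get back an IPv4 address. socket.gethostbyname delegates to the
# system's configured resolver.
import socket

def name_to_address(hostname):
    return socket.gethostbyname(hostname)

# "localhost" resolves without touching the network, so this works offline.
addr = name_to_address("localhost")
```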
&lt;br /&gt;
For the sake of this paper, many technical details are avoided in favour of a simpler, higher-level view of the system.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate for consideration as a public good. Providing the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service.  Users implicitly accept that all internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any number of alternatives, such as Google&#039;s Public DNS or OpenDNS. This can be a healthy way to avoid the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
However, issues arise when considering user privacy.&amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, the current implementations have some problems, centred on bottlenecks, update propagation, attack resiliency and general performance.  Any replacement system should improve upon these issues.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users.  For example, Bell Canada customers across the entire country are served by just two DNS servers.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also affects attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the internet.  DNS servers around the world cache records for a fixed time-to-live (TTL) and only re-query once it expires, so this period is required for changes to reach everyone.&lt;br /&gt;
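A minimal sketch of why the delay exists, assuming (as real resolvers do) that cached records are served until a time-to-live expires; the class, names and fake upstream here are purely illustrative:&lt;br /&gt;

```python
import time

# Minimal sketch of a TTL-based resolver cache: records are served from
# cache until their time-to-live expires, so an upstream change is not
# visible until the cached copy ages out. The clock is injectable purely
# to make the expiry easy to demonstrate.
class TtlCache:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.records = {}  # name -> (address, expiry time)

    def get(self, name, resolve_upstream, ttl=3600):
        entry = self.records.get(name)
        if entry is not None and entry[1] > self.clock():
            return entry[0]                      # still fresh: cached answer
        address = resolve_upstream(name)         # expired or absent: re-query
        self.records[name] = (address, self.clock() + ttl)
        return address

# demonstrate with a fake clock and a fake upstream whose answer changes
now = [0.0]
cache = TtlCache(clock=lambda: now[0])
upstream = {"example.org": "192.0.2.1"}
first = cache.get("example.org", upstream.__getitem__, ttl=60)
upstream["example.org"] = "192.0.2.2"            # the domain moves
stale = cache.get("example.org", upstream.__getitem__, ttl=60)  # old answer
now[0] = 61.0                                    # TTL expires
fresh = cache.get("example.org", upstream.__getitem__, ttl=60)  # new answer
```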
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited number of servers to severely disrupt internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-related, this requires constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
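As an aside, the single-point-of-failure concern is commonly mitigated on the client side by failing over across several resolvers. The sketch below is a generic illustration only, modelling each resolver as a callable that either answers or raises OSError:&lt;br /&gt;

```python
# Failover across multiple resolvers: try each in order and return the
# first answer, so a single unreachable server no longer takes the whole
# lookup path down. Resolvers are modelled as callables that either
# return an address or raise OSError.
def resolve_with_failover(name, resolvers):
    last_error = None
    for resolve in resolvers:
        try:
            return resolve(name)
        except OSError as exc:
            last_error = exc                # remember why, then try the next one
    raise OSError(f"all resolvers failed for {name!r}") from last_error

def dead(name):
    raise OSError("server unreachable")

def alive(name):
    return "192.0.2.53"

answer = resolve_with_failover("example.org", [dead, alive])
```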
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Research is being done on improving the aspects of DNS indicated above.  One candidate for a next-generation naming system, actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;The Design and Implementation of a Next Generation Name Service for the Internet - Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf]&amp;lt;/ref&amp;gt; Through a structure based on caching and peer-to-peer distribution, the system claims an improvement on all of the factors indicated above.  It has the added benefit of being incrementally deployable, a very important property when upgrading any part of a complex, distributed system like the internet. Given the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  The user base depends on some form of trusted source, whether a government initiative, a corporately controlled process, or a user-contributed service.  Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the internet as we do today.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issue is addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection are averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of the CoDoNS service discussed above, for example, would fit naturally with a gradual, publicly coordinated rollout. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a public physical network infrastructure, localized systems could continue to function as pockets of the internet even when disconnected.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization must work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system will impose a financial burden on the public, as with any good that is brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority could, given less overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the internet should be left in private hands, and only after they have proven themselves vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are, by definition, something everyone should have access to and something deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system, and it is a central concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes these are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. Because of this necessity, people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, significant portions of the Internet would be undesirable or impossible for the public to hold. Many modern businesses rely on the Internet for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes more sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria emerged that could be used to identify future public goods on the Internet. Using this set of criteria, one should be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone, it will be vital to ensure that the Internet evolves with its changing demands and that its fundamental aspects are secured. The best, and only true, way of doing this is to give users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9393</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9393"/>
		<updated>2011-04-11T19:31:35Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* Implementation Issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all.  The internet is now such an important piece of our economy, culture, communication and entertainment that the technologies enabling it to operate should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the internet.  Using three examples of public-good candidates (physical infrastructure, web caching and DNS), we illustrate the viability and benefits of this conversion.  Finally, we establish criteria with which to identify other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the internet to this long list. The internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet is quickly becoming essential for individuals worldwide. While it might be easy to identify the Internet as a public good, identifying how to convert it into one is more difficult.  The Internet is a system of heterogeneous computers and hardware that runs on an even more diverse set of protocols and software. It is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (ie. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the internet, web caching and DNS. We propose how these aspects could be removed from the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the internet. After examining the benefits of converting them into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the internet as candidates for the public good.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* ie. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (ie. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the internet has enmeshed itself in the fabric of society and is a part of many people&#039;s daily ritual.  For many, the internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers). These are the entities any user must currently pay to gain access to the Internet. For the purposes of this paper, we consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we do not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion, by assigning priorities to packets according to criteria decided by the ISPs.  While congestion control can benefit everyone, with the technology implemented by private companies we do not know which protocols are limited, by how much, or whether it is done only at peak times.  We do not know whether the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP, implemented by slowing or disallowing traffic to competitors.  While this has not been openly proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently, we have become acutely aware that ISPs provide convenient choke points: in Egypt, during an uprising, the incumbent government shut down the population&#039;s access to the internet by simply forcing the ISPs to shut down.  This is not a conclusive list of the weaknesses private ownership of the infrastructure presents, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
Given the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism.  This would transform the infrastructure into a virtual public good by requiring the ISPs to act in accordance with the best interests of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industry through lobbyists and other means.  Additionally, government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until a law preventing it is passed.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist and operate in parallel with the current ISPs.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people might still desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment: individual urban centres acting at the municipal level could start with localized infrastructure, the provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, can be envisioned as an omnipresent overlay that provides alternative transportation for Internet traffic.  At the simplest level this mesh would be composed of a number of fairly static, highly available nodes consisting of users&#039; home computers.  In addition there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and Internet-aware personal devices.  The nodes would use algorithms to elect members to act as supernodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect those centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection run in parallel with the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially the privately owned ISPs might even disappear entirely.&lt;br /&gt;
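The supernode election mentioned above can be sketched in a few lines. This is a hypothetical illustration (the scoring rule, names and fields are assumptions on our part, not the DART mechanism from the cited paper): stable, well-connected fixed nodes are ranked above mobile ones.&lt;br /&gt;

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    uptime_hours: float    # observed availability of the node
    bandwidth_mbps: float  # measured link speed
    mobile: bool           # laptops/phones are poor supernode candidates

def elect_supernodes(nodes, count):
    """Pick the most stable, best-connected nodes to carry routing state."""
    candidates = [n for n in nodes if not n.mobile]
    # Simple score: favour long uptime and high bandwidth.
    candidates.sort(key=lambda n: n.uptime_hours * n.bandwidth_mbps, reverse=True)
    return candidates[:count]

nodes = [
    Node("desktop-a", 700, 50, False),
    Node("laptop-b", 20, 30, True),
    Node("desktop-c", 200, 100, False),
    Node("phone-d", 5, 10, True),
]
supers = elect_supernodes(nodes, 2)  # the two stationary desktops win
```

A real mesh would re-run such an election periodically as nodes join, leave and move, which is part of the routing-accuracy challenge the cited work addresses.&lt;br /&gt;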
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload from the conventional infrastructure basic services such as email, instant messaging, and other services tolerant of lower speeds. This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides a significant increase in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a major benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh continued to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for an ISP for some users whose primary use of the Internet is surfing or visiting low-bandwidth websites, and it could make Internet access available to fiscally disadvantaged members of the population as well.  Finally, a mesh topology has the potential to extend Internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, with its low population density, has areas comparable to the rural regions where this technology has already been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh increases in size, it can be self-organizing, with member nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied and is related to the speeds the mesh can sustain, again allowing incremental deployment, but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good is presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Having various levels of government participate in the provision of this infrastructure would necessitate an increase in taxes. Since the support would come from all levels of government, the tax burden would be distributed across all levels, becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to use the faster network connection to download it.  The software would therefore have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some personal cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
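The network-switching behaviour described under Software Changes above might look like the following sketch (the threshold value and all names are hypothetical): small transfers ride the free public mesh, large ones use the faster private link when available.&lt;br /&gt;

```python
# Hypothetical cutoff: payloads up to this size are acceptable on the
# slower public mesh; larger ones (e.g. big email attachments) prefer
# the faster privately owned ISP connection.
MESH_THRESHOLD_BYTES = 256 * 1024

def choose_network(payload_bytes, mesh_available, isp_available):
    """Return which network a transfer should use, given what is reachable."""
    if mesh_available and payload_bytes <= MESH_THRESHOLD_BYTES:
        return "mesh"          # small transfer: keep it off the paid link
    if isp_available:
        return "isp"           # large transfer, or mesh unreachable
    if mesh_available:
        return "mesh"          # slow for a big payload, but better than nothing
    raise ConnectionError("no network available")
```

An email client built this way would fetch message headers and bodies over the mesh but route a multi-megabyte attachment to the ISP link, which is exactly the awareness the paragraph above calls for.&lt;br /&gt;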
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects so that they can be reused later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can exist either on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the Internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of total bandwidth used and, within this web-based traffic, the level of similarity between requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. Moreover, for many ISPs transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can be cut down significantly as well. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end-user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and a high bandwidth usage by the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that need to be passed through by providing data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness a web cache adds to the Internet, allowing users to access documents even if the supplying web server is down, as well as the opportunity for organizations to analyze Internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
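The check-the-cache-then-fetch flow described at the start of this section can be sketched as follows. This is a toy in-memory illustration (the class and all names are hypothetical, and it ignores the expiry and validation conditions mentioned above):&lt;br /&gt;

```python
class WebCache:
    """Minimal illustration of the check-cache-then-fetch pattern."""

    def __init__(self, fetch):
        self.fetch = fetch   # function that retrieves data from the origin server
        self.store = {}      # url -> cached response body
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1              # served locally: no origin traffic
            return self.store[url]
        self.misses += 1
        data = self.fetch(url)          # forwarded to the originating server
        self.store[url] = data          # kept for the next requester
        return data

origin_calls = []                       # track how often the origin is contacted

def fetch(url):
    origin_calls.append(url)
    return "contents of " + url

cache = WebCache(fetch)
cache.get("http://example.com/logo.png")   # miss: goes to the origin
cache.get("http://example.com/logo.png")   # hit: served from the cache
```

The second request never reaches the origin server, which is the source of the bandwidth, latency and server-load savings itemized above.&lt;br /&gt;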
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the Internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as we consider them implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. Web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web content to propagate towards the demand. &lt;br /&gt;
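The lookup path through such a hierarchy can be sketched as follows (a hypothetical illustration with invented names): each miss is passed one level up, and the response leaves a copy behind at every level on the way back down.&lt;br /&gt;

```python
class CacheLevel:
    """One level in the hierarchy (client, local, regional, national)."""

    def __init__(self, name, parent=None, origin=None):
        self.name = name
        self.parent = parent   # next level up; None at the top level
        self.origin = origin   # origin-server fetch, used only at the top
        self.store = {}

    def get(self, url):
        if url in self.store:
            return self.store[url]
        # Miss: ask the next level up, or the origin server at the top.
        data = self.parent.get(url) if self.parent else self.origin(url)
        self.store[url] = data  # copy left behind on the way back down
        return data

def origin_fetch(url):
    return "data for " + url

national = CacheLevel("national", origin=origin_fetch)
regional = CacheLevel("regional", parent=national)
local = CacheLevel("local", parent=regional)

local.get("http://example.com/")  # one miss fills local, regional and national
```

After the first request, every level holds a copy, so later requests from anywhere in the region stop at the nearest level that has one.&lt;br /&gt;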
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains metadata about the contents of all the other caches it cooperates with and uses it to fulfill web requests received from clients. This scheme allows for better load balancing and introduces fault tolerance that is not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
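A minimal sketch of the metadata idea behind distributed caching might look like this (hypothetical names throughout; real systems exchange compact digests and must handle stale directory entries, which this toy version omits):&lt;br /&gt;

```python
class DistributedCache:
    """Single-level cooperating caches, each keeping a directory of peers' contents."""

    def __init__(self, name):
        self.name = name
        self.store = {}       # url -> locally cached data
        self.peers = []
        self.directory = {}   # url -> peer cache believed to hold it

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def put(self, url, data):
        self.store[url] = data
        # Advertise the new entry so peers can route requests here.
        for peer in self.peers:
            peer.directory[url] = self

    def get(self, url):
        if url in self.store:
            return self.store[url]
        holder = self.directory.get(url)
        if holder is not None:
            return holder.store.get(url)  # fetched from a sibling, not the origin
        return None                       # miss everywhere: would go to the origin

a = DistributedCache("cache-a")
b = DistributedCache("cache-b")
a.connect(b)
a.put("http://example.com/x", "payload")
```

Here `b` satisfies a request for a page it never fetched itself by consulting its directory, which is what lets a flat set of caches balance load without a hierarchy.&lt;br /&gt;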
&lt;br /&gt;
Finally, a third option for large-scale web caches is a hybrid architecture. In such a system a hierarchy of caches exists, but a number of caches on each level also cooperate with each other in a distributed fashion. This type of system can combine the advantages of the hierarchical and distributed architectures. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system, where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the Internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers; their customers&#039; satisfaction is important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government actually taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This does not mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Standardizing web caches at the ISP level would not only allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches, and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized, standardized cache hierarchies would reduce wasted bandwidth and improve the end-user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing overall storage capacity. This increase in storage would allow more web data to be stored in more places, which translates into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache, allowing users to share their caches with each other and build neighbourhood-specific, ultra-fast caches. This could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding themselves which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mediated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option for enabling lower-level distributed caching would be to extend the capabilities of the cable or DSL modems currently in use. These new modems would have a relatively small amount of storage and computing power, removing the burden from the users&#039; computers and allowing a special-purpose device to take over. Since most users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood or even house to house depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow the caching of web application code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users who actually use them, while using the locally cached data as well. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial or physical resources required of modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another benefit of this extended definition of web caching is that individual fragments of the Internet that, for one reason or another, become disconnected from the Internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe such as an earthquake, or a section of the Internet willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in all reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached web requests from users of one ISP that could be satisfied by caches of another local ISP must instead be served all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, significantly reducing the number of long-distance web requests.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied nearby are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: web requests satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national-level caches rather than having to be sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that is not present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication allows for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability when combined with full web application caching: in the event that a single region is disconnected from the Internet, users would still be able to use the cached popular web applications and data until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is re-established, resulting in unprecedented reliability for the Internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, since full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. If a region is disconnected, all users in that region would still be able to use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity put into the public&#039;s hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how caching is done, can be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthwhile investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, in both software and infrastructure, is incrementally deployable. A scheme similar to the one proposed would most likely start in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. As these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial-level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already in place, would certainly involve significant infrastructure costs. Although there may be a considerable amount of infrastructure available in large urban centres, rural regions, as well as caches in the higher levels of the hierarchy (provincial, national, etc.), will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. It would take a massive amount of work either to convert the old ISP caches or to set up new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions that a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax or possibly through usage fees. Secondly, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) in order to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is needed for referring to the different resources within this distributed system.  DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user-friendly manner, a user or application needs only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher-level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Provision of the service currently falls to an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
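The static, distributed tree described above can be sketched in a few lines. The zone data and address below are invented for illustration; real resolution involves many refinements deliberately omitted here.&lt;br /&gt;

```python
# Hypothetical zone data: each interior node maps a label to a subtree;
# a node holding an "_addr" key carries the address record for that name.
ZONE_TREE = {
    "ca": {
        "carleton": {
            "scs": {"_addr": "10.0.0.1"},   # made-up address
        },
    },
}

def resolve(name, tree=ZONE_TREE):
    """Walk the tree from the root, consuming the labels of `name` in
    reverse order (top-level domain first), the way a query descends
    from the root servers toward an authoritative server."""
    node = tree
    for label in reversed(name.split(".")):
        if label not in node:
            return None              # name does not exist (NXDOMAIN)
        node = node[label]
    return node.get("_addr")
```

For example, `resolve("scs.carleton.ca")` walks `ca`, then `carleton`, then `scs`, and returns the stored address, while an unknown name returns `None`.&lt;br /&gt;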
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  Users understand that all internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011 [http://www.michaelgeist.ca/content/view/3199/1/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011 [http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011 [http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure a setup where their DNS requests are processed by any number of alternative providers, such as Google&#039;s public DNS project or OpenDNS. This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in the &amp;quot;good Samaritans&amp;quot; of a public community.&lt;br /&gt;
&lt;br /&gt;
Privacy issues also arise here, however.  &amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011 [http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011 [http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011 [http://theos.in/windows-xp/free-fast-public-dns-server-list/]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementations.  These issues centre on bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers is accessed by many users.  For example, Bell Canada customers are served by two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also weakens attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the internet.  DNS servers around the world cache records and only refresh them on a fixed schedule, so this period is required for the changes to reach everyone.&lt;br /&gt;
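The delay comes from per-record caching: each cached answer carries a time-to-live, and a resolver keeps serving the old answer until it expires. A minimal sketch, with invented names, addresses and TTL values:&lt;br /&gt;

```python
class TTLCache:
    """Each cached record is kept until its time-to-live expires; until
    then a resolver answers from the (possibly stale) copy, which is why
    a nameserver change can take hours to propagate."""

    def __init__(self):
        self.records = {}              # name -> (address, expiry_time)

    def put(self, name, address, now, ttl):
        self.records[name] = (address, now + ttl)

    def get(self, name, now):
        entry = self.records.get(name)
        if entry is None or now >= entry[1]:
            return None                # missing or expired: re-query upstream
        return entry[0]
```

A record cached with a one-hour TTL keeps being served for that hour even if the authoritative answer changed a minute after it was fetched.&lt;br /&gt;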
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely disrupt internet traffic at any time.  Measures are in place to prevent this kind of attack; however, as with anything security-based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Research is being done on improving these aspects of DNS.  One candidate next-generation naming system, actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;The Design and Implementation of a Next Generation Name Service for the Internet - Accessed March 2011 [http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all of the factors indicated above.  It has the added benefit of being incrementally deployable, which is a very important point when it comes to upgrading any part of a complex, distributed system like the internet.&lt;br /&gt;
&lt;br /&gt;
Due to the high-level nature of the discussion of this report, technical specifications will be avoided in favour of looking at the role of DNS as a whole for its use as a public good.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  The user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.  Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the internet as we do today.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public’s best interest in maintaining this service, the reliability and trust issues are addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public’s best intentions in mind.  Misinformation and misdirection would be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any next-generation service or upgrade, it would be done when deemed most beneficial for the public.  The incremental deployment offered by a service such as CoDoNS fits this model well, as it could be rolled out region by region as public resources allow. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could continue to function as self-contained pockets of the internet, preserving a basic level of service even when disconnected.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The governing authority would have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so it is essential that any such organization work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining, and acquiring or mandating, the current system would impose a financial burden on the public, as does any good that is brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than a public authority would, given their lower overhead and lesser caution when it comes to decision making.  Users may miss out on the newest services available while the authority evaluates its upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could end up owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would end up being very expensive. In general, novel aspects of the internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve these, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are by definition something essential that everyone should have access to, ensuring a basic level of service for all users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes these are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the internet is becoming a ubiquitous entity in modern-day society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, in essence transitioning the Internet into a public good. However, there are significant portions of the Internet that it would be undesirable or impossible for the public to hold. Many modern businesses rely on the Internet today for a significant portion of their revenue and are in fact responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes more sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet evolves with these changing demands and that its fundamental aspects are secured. The best and only true way of doing this will be to give the users overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9391</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9391"/>
		<updated>2011-04-11T19:25:39Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all.  The internet is now such an important piece of our economy, culture, communication and entertainment that the technologies that enable it to operate should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the internet.  Using three examples of public-good candidates (physical infrastructure, web caching and DNS), we illustrate the viability and benefits of this conversion.  Finally, we establish criteria with which to identify other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the internet to this list. The internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet is quickly becoming essential for individuals worldwide. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, running an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (i.e. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the internet, web caching and DNS. We propose how these aspects could be removed from the sole control of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the internet as candidates for the public good.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* e.g. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (ie. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots exist in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual.  For many, the internet is as important as roads for conducting their daily activities; yet while roads are not privately owned, the infrastructure of the internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins, with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion; it assigns priorities to packets using various criteria decided by the ISPs.  While such technology could benefit everyone, when it is implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We do not know whether the technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading the infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently we have become acutely aware that ISPs provide convenient choke points: in Egypt, during an uprising, the incumbent government shut down the population&#039;s access to the internet by simply forcing the ISPs to shut down.  This is not a conclusive list of the weaknesses that private ownership of the infrastructure presents; there are a host of others, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the behaviour of the ISPs to be in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industries through lobbyists and other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably the speed of this new infrastructure would not match the incumbents&#039;, and people might desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability, consisting of users&#039; home computers.  In addition there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable&lt;br /&gt;
Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;&lt;br /&gt;
Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol&lt;br /&gt;
for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these different levels of connection parallel the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
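The super-node election mentioned above could, in its simplest form, favour the best-connected nodes. The sketch below uses node degree as a crude stand-in for whatever fitness criteria a real protocol (such as those cited) would use; the link data and the 20% default are invented for illustration:&lt;br /&gt;

```python
def elect_super_nodes(links, fraction=0.2):
    """Pick the best-connected fraction of mesh nodes to act as super
    nodes.  `links` is an iterable of (a, b) wireless adjacencies."""
    degree = {}
    for a, b in links:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    k = max(1, round(len(degree) * fraction))
    # Highest-degree nodes first; ties broken by name for determinism.
    ranked = sorted(degree, key=lambda n: (-degree[n], n))
    return set(ranked[:k])
```

In a real mesh the election would also weigh availability (static home machines over mobile devices) and would be re-run as nodes join and leave.&lt;br /&gt;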
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services more tolerant of lower speeds from the conventional infrastructure.  This would free up bandwidth on the privately owned ISPs, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a major benefit in a disaster scenario: other forms of communication relying on centralized infrastructure would likely fail, while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the internet is surfing or visiting low-bandwidth websites.  It could also help make internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Due to its low population density, Canada has areas that parallel the rural regions where the technology has already been deployed.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless routers to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to the higher-speed wired infrastructure of the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds that are sustainable by the mesh, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in providing this infrastructure would necessitate an increase in taxes.  Since the support would come from all levels of government, the taxes would be distributed across all levels, becoming almost imperceptible at any one of them.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service; if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could take the form of consumed CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware, in the form of a wireless router with additional computational power, could be a mandatory purchase.&lt;br /&gt;
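The network-aware behaviour described under &#039;&#039;&#039;Software Changes&#039;&#039;&#039; above can be sketched as a simple selection rule. The size threshold and network labels are assumptions for illustration only; real clients would also probe availability and measured throughput.&lt;br /&gt;

```python
# Illustrative sketch of a client choosing between the public mesh and
# the private ISP link: small, latency-tolerant transfers go over the
# mesh, large ones over the faster ISP connection. The 1 MiB threshold
# is an arbitrary assumption.

MESH_THRESHOLD_BYTES = 1 * 1024 * 1024  # beyond this, prefer the ISP link

def choose_network(payload_size, isp_available=True):
    """Return which network an email client might use for a transfer."""
    if payload_size <= MESH_THRESHOLD_BYTES or not isp_available:
        return "public-mesh"   # basic service: tolerant of lower speed
    return "private-isp"       # large attachment: use the fast network

print(choose_network(10_000))            # small message body
print(choose_network(25 * 1024 * 1024))  # large attachment
```

An email client built this way would fetch headers and short bodies over the mesh and only switch to the ISP connection for the occasional large attachment.&lt;br /&gt;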
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data then, barring certain conditions, the cached data is returned and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (i.e. logos, static text, pictures and other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web-based traffic can account for upwards of 70% of total bandwidth used and, of this web-based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. For many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, they are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data must travel is cut down significantly (web caches are intended to be relatively close to the end user), the time to deliver the content can be reduced significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage at the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that reach it by serving data the cache has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the robustness that web caches lend to the internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
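The check-then-fetch flow described at the start of this section can be sketched as follows. The fetch function is a hypothetical stand-in for a real upstream HTTP request, and the cache-control conditions (expiry, validation) that a real proxy honours are omitted.&lt;br /&gt;

```python
# Minimal sketch of a proxy cache: return a stored copy when present,
# otherwise fetch from the origin once and remember the result.

cache = {}

def fetch_from_origin(url):
    # Placeholder for an upstream HTTP fetch (assumption, not a real API).
    return "content of " + url

def handle_request(url):
    if url in cache:               # cache hit: the origin never sees it
        return cache[url], "HIT"
    body = fetch_from_origin(url)  # cache miss: go to the origin once
    cache[url] = body
    return body, "MISS"

_, first = handle_request("http://example.org/logo.png")
_, second = handle_request("http://example.org/logo.png")
print(first, second)  # → MISS HIT
```

Every HIT is a request the ISP did not have to send upstream, which is exactly the bandwidth saving the list above describes.&lt;br /&gt;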
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large-scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national-level cache. Web requests are first sent to the lowest-level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
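The climb-then-copy-down behaviour just described can be sketched as follows; the level names and data are made up, and a real hierarchy would add expiry and eviction.&lt;br /&gt;

```python
# Sketch of hierarchical caching: a miss climbs toward the national
# level (and ultimately the origin), and the response leaves a copy at
# every level it passes on the way back down.

class Cache:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.store = name, parent, {}

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        if self.parent is not None:
            body, hit_at = self.parent.get(url)      # pass the miss upward
        else:
            body, hit_at = "data for " + url, "origin"  # top level fetches
        self.store[url] = body   # leave a copy while travelling back down
        return body, hit_at

national = Cache("national")
regional = Cache("regional", parent=national)
local    = Cache("local", parent=regional)

_, where1 = local.get("http://example.org/")  # first request: from origin
_, where2 = local.get("http://example.org/")  # now cached at every level
print(where1, where2)  # → origin local
```

After one request, the object sits in the local, regional and national stores, so later requests from anywhere in the hierarchy stop at the nearest level.&lt;br /&gt;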
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the contents of all of the other caches it cooperates with and uses it to fulfill the web requests it receives from clients. This scheme allows for better load balancing and introduces fault tolerance that was not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
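The peer meta-data lookup described above can be sketched as follows. The shared directory is a simplifying assumption standing in for whatever metadata-exchange protocol a real distributed cache would use between peers.&lt;br /&gt;

```python
# Sketch of one-level distributed caching: each cache advertises what
# it holds into shared metadata, and forwards misses to the peer that
# the metadata says has the object.

class PeerCache:
    def __init__(self, name, directory):
        self.name, self.store = name, {}
        self.directory = directory        # shared metadata: url -> cache

    def put(self, url, body):
        self.store[url] = body
        self.directory[url] = self        # advertise to cooperating peers

    def get(self, url):
        if url in self.store:
            return self.store[url], self.name
        holder = self.directory.get(url)  # consult metadata, not the origin
        if holder is not None:
            return holder.store[url], holder.name
        return None, None                 # genuine miss: would go to origin

directory = {}
a, b = PeerCache("cache-A", directory), PeerCache("cache-B", directory)
a.put("http://example.org/", "page-body")
body, served_by = b.get("http://example.org/")
print(served_by)  # → cache-A
```

Because any peer can answer for any other, load spreads across the level, and losing one cache loses only the objects it alone held.&lt;br /&gt;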
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the internet, and therefore vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to those users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction matters, of course, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end-user-experience aspects of web caching. This could be achieved by the government taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow these previously private, uncooperative proxies to act more like distributed web caches, it would also allow a natural hierarchy to be built. This hierarchy would be based on geography: the ISP-level caches would work together to service a relatively small region, followed by a level of web caches servicing a larger geographical region, then provincial/state-level web caches and finally a national level. These would all be standardized to allow regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized cache hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. It would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capacity. This increase in storage would allow more web data to be stored in more places, which would translate into more robust, fault-tolerant web caches.&lt;br /&gt;
&lt;br /&gt;
Once web caching became a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. It could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to try to retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option to allow for lower-level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from the users&#039; computers and allow a special-purpose device to take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever-decreasing hardware costs a relatively powerful device could be built quite inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. The new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would build their applications to make use of the available resources and then maintain a minimal back-end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run the applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatic increases in hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources available to modern-day corporations.&lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that it would allow individual fragments of the internet that, for one reason or another, become disconnected from the internet as a whole to keep communicating through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe, such as an earthquake, or a section of the internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in the reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of bandwidth wasted on unneeded web requests sent from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so uncached requests from users of one ISP that could be satisfied by the caches of another local ISP must instead be retrieved all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. Such requests would then be satisfied locally, reducing the number of long-distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. The improvement would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented: web requests satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside the local caches, the standardized caching hierarchy would mean that more requests are satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the caching hierarchy injects a level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication allows for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. The proposed caches would also drastically improve reliability when combined with full web application caching: in the event that a single region is disconnected from the internet, users would still be able to use the popular web applications and data that are cached until they are reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is re-established, resulting in unprecedented reliability for the internet&#039;s most popular sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, the fact that full web applications would now be able to be cached would mean that any user would have full access to any web application or any data that is currently &#039;living&#039; on any reachable cache. This means that if a region is disconnected, all users in that region would be able to use any application or data that is stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies that improve how caching is done, could be implemented whenever it is in the best interest of the public. Currently, we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it incorporated the ISP caches already in place, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work to either convert the old ISP caches or set up the new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems are set up, they would have to be closely monitored and tuned as conditions changed in the regions a given cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. First, they would most likely have to pay for both the infrastructure and support costs in the form of a tax, or possibly through usage fees. Second, if the low-level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
Given the Internet’s vast ubiquity, as discussed in the previous sections, a convenient method is required for referring to the different resources within this distributed system.  DNS (the Domain Name System) aids this process by allowing resources to be referred to by name rather than by a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user-friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher-level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP), which maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
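The &#039;&#039;static, distributed tree&#039;&#039; view taken here can be sketched as a toy resolver. The zone contents are entirely made up (the address is from the RFC 5737 documentation range), and real DNS resolution involves delegation, caching and many record types omitted here.&lt;br /&gt;

```python
# Toy model of DNS as a static tree: resolution walks the name from
# the root downward, one zone per label, until it reaches an address.

root = {
    "org": {
        "example": {"www": "192.0.2.10"},  # RFC 5737 documentation address
    },
}

def resolve(name, tree=root):
    """Walk the tree: 'www.example.org' -> org -> example -> www -> IP."""
    node = tree
    for label in name.split(".")[::-1]:    # most-significant label first
        node = node[label]                 # descend one zone per label
    return node

print(resolve("www.example.org"))  # → 192.0.2.10
```

Each dictionary level corresponds to a zone that, in the real system, is served by a different set of name servers, which is what makes the tree &#039;&#039;distributed&#039;&#039;.&lt;br /&gt;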
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, the ISP takes care of the DNS service.  Users must accept that all internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, serve advertising-based redirects when a user requests a non-existent URL.  This can be seen as helpful (in the event of typos) or as a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 -  Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their systems so that DNS requests are processed by any number of alternatives, such as Google&#039;s public DNS project or OpenDNS.  This can be a healthy approach to avoiding the ISP issues, but it still places significant trust in another corporation or in &amp;quot;good Samaritans&amp;quot; in a public community.&lt;br /&gt;
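As a concrete illustration, on many Unix-like systems switching to a public resolver is a matter of listing its addresses in the resolver configuration. The addresses below are the providers&#039; published ones; the exact file and mechanism vary by operating system, and on some systems this file is managed automatically.&lt;br /&gt;

```shell
# /etc/resolv.conf -- route DNS queries to Google Public DNS instead of
# the ISP's resolvers. (OpenDNS publishes 208.67.222.222 / 208.67.220.220
# as an alternative.)
nameserver 8.8.8.8
nameserver 8.8.4.4
```

Queries are sent to the listed servers in order, falling back to the next one if the first does not respond.&lt;br /&gt;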
&lt;br /&gt;
Privacy issues arise, however, when a third party handles all of a user&#039;s DNS requests.&amp;lt;ref name =&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, the current implementation has some problems.  These concern bottlenecks, update propagation, attack resiliency and general performance.  Any replacement system should improve upon these issues.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects because a small number of servers must serve many users.  For example, Bell Canada customers are served by two servers for the entire country.  Just as web caching was shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also hurts attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the internet.  DNS servers around the world update their records on a fixed schedule, and once caching is taken into account, this full period may be needed for a change to reach everyone.&lt;br /&gt;
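The mechanics can be sketched with a minimal caching resolver: each cached answer carries a time-to-live (TTL), and the old value keeps being served until that TTL expires. The names, addresses and injected clock below are invented for illustration.&lt;br /&gt;

```python
import time

# Minimal sketch of why DNS changes propagate slowly: resolvers cache
# each answer for its TTL and keep serving the old value until it expires.
# Names, addresses and the injected clock are invented for illustration.

class CachingResolver:
    def __init__(self, upstream, clock=time.monotonic):
        self.upstream = upstream  # callable: name -> (address, ttl_seconds)
        self.clock = clock
        self.cache = {}           # name -> (address, expires_at)

    def resolve(self, name):
        entry = self.cache.get(name)
        if entry and self.clock() < entry[1]:
            return entry[0]       # still fresh: serve the cached answer
        address, ttl = self.upstream(name)
        self.cache[name] = (address, self.clock() + ttl)
        return address

# Simulated authoritative record that changes after being cached.
records = {"example.org": ("198.51.100.1", 3600)}  # one-hour TTL
now = [0.0]
resolver = CachingResolver(lambda name: records[name], clock=lambda: now[0])

print(resolver.resolve("example.org"))          # 198.51.100.1 (now cached)
records["example.org"] = ("203.0.113.9", 3600)  # record changes upstream
print(resolver.resolve("example.org"))          # still 198.51.100.1: TTL live
now[0] = 3601.0                                 # advance past the TTL
print(resolver.resolve("example.org"))          # 203.0.113.9: change visible
```

With many such caches chained between the authoritative server and end users, the worst-case delay is roughly the sum of the TTLs along the path, which is where figures like 48 hours come from.&lt;br /&gt;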
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure identified in the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited servers to severely cripple internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security-related, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Research is being done on improving the aspects of DNS indicated above.  One candidate for a next-generation naming system, actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;The Design and Implementation of a Next Generation Name Service for the Internet - Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Through a structure based on caching and peer-to-peer distribution, the system boasts improvements on all the factors indicated above.  It is also incrementally deployable, which is a very important property when upgrading any part of a complex, distributed system like the internet.&lt;br /&gt;
&lt;br /&gt;
Given the high-level nature of this report, technical specifications are avoided in favour of examining the role of DNS as a whole and its use as a public good.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  Its user base depends on some form of trusted source, whether that is a governed initiative, a corporately controlled process, or a user-contributed service.  Overall, the necessity of the service makes it a prototypical candidate to be a public good: it is required to access and use the internet as we do today.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest driving the maintenance of this service, the reliability and trust issues are addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind.  Misinformation and misdirection will be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of a service such as CoDoNS would make such an upgrade practical.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a publicly held physical network infrastructure, localized systems could continue to function as self-contained pockets of the internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such organization would be required to work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Maintaining and acquiring, or mandating, the current system will impose a financial burden on the public, as does any good that is brought into the public&#039;s hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemes sooner than some form of public authority would, given their lower overhead and caution in decision making.  Users may miss out on the newest services available while the authority evaluates upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. If this is not the case, then the public could start owning aspects that are not permanent and will quickly become obsolete. Aspects of the Internet would then cycle through the public&#039;s hands very quickly, which would end up being very expensive. In general, novel aspects of the internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be looked at as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot be given guaranteed access to all of the users in its reach then it should not, by definition, be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes these are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, essentially transitioning the Internet into a public good. However, there are significant portions of the Internet that it would be undesirable or impossible for the public to hold. Many modern businesses rely on the Internet today for a significant portion of their revenue and are actually responsible for a lot of innovation and evolution within the Internet itself. For these reasons it makes more sense to bring only portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. By using this set of criteria, one would be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
	<entry>
		<id>https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9390</id>
		<title>DistOS-2011W Public Goods</title>
		<link rel="alternate" type="text/html" href="https://homeostasis.scs.carleton.ca/wiki/index.php?title=DistOS-2011W_Public_Goods&amp;diff=9390"/>
		<updated>2011-04-11T19:23:05Z</updated>

		<summary type="html">&lt;p&gt;Aschoenr: /* DNS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Abstract=&lt;br /&gt;
Public goods are resources that are held in common for the benefit of all.  The internet is now such an important piece of our economy, culture, communication and entertainment that the technologies enabling it to operate should be placed in trust for the benefit of the entire population.  In this paper we establish a model to help define public goods as they relate to the internet.  Using three examples of public-good candidates (physical infrastructure, web caching and DNS), we illustrate the viability and benefits of this conversion.  Finally, we establish criteria with which to identify other candidates for public goods.&lt;br /&gt;
&lt;br /&gt;
=Introduction=&lt;br /&gt;
As societies have developed, communities have recognized the need for public goods.  From simple shepherds to colonial empires to current democratic superpowers, all societies have identified public goods, which can be defined as “resources that are held in common in the sense that no one exercises any property right with respect to these resources or the exclusive right to choose whether the resource is made available to others”&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;&amp;gt;David Johnson, Kobus Roux. Building Rural Wireless Networks: Lessons Learnt and Future Directions. WINS-DR, 5. September 2008. DOI=10.1145/1410064.1410068 [http://doi.acm.org/10.1145/1410064.1410068 link]&amp;lt;/ref&amp;gt;. These public goods provide a noticeable benefit to all of the individuals composing the society. Generally speaking, these entities are deemed to be essential, beneficial and non-excludable to individuals and the public as a whole. Roads, parks, the military, police, water and fresh air are all examples of public goods. We propose to add the internet to this list. The internet is becoming a vital tool in nearly everyone&#039;s life, playing a massive part in modern business, education, communication and entertainment. As we move into the future, access to the internet for individuals worldwide is quickly becoming essential. While it might be nice to identify the Internet as a public good, identifying how to convert it into one is a more difficult process.  The Internet is a system of heterogeneous computers and hardware, and it runs an even more diverse set of protocols and software. This system is much too large to be effectively managed by a single governing body, and there are certain aspects of the internet (ie. business entities) that should not be publicly controlled. With this in mind, we have tried to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* Which aspects of the Internet should be controlled by the public?&lt;br /&gt;
* How are these aspects identified?&lt;br /&gt;
* Are these aspects absolutely fundamental to the functionality of the Internet? &lt;br /&gt;
* What are the problems with how these aspects are controlled today?&lt;br /&gt;
* What are the advantages and disadvantages of having this aspect of the Internet as a public good?&lt;br /&gt;
&lt;br /&gt;
We have identified three key pieces of the Internet that are excellent candidates to become public goods: the physical infrastructure of the internet, web caching and DNS.  We propose how these aspects could be removed from being solely in the hands of private companies and converted into public goods.  We chose these three pieces because they are absolutely essential to the current operation of the internet. After examining the benefits of converting these three pieces of the Internet into public goods, we added another key question to the list above:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
Upon analysis, common benefits were identified, and we believe these can provide base criteria that can be used to identify other portions of the internet as candidates for the public good.&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
&lt;br /&gt;
Generally speaking, a public good is:&lt;br /&gt;
* an entity deemed to be essential, beneficial and non-excludable to individuals and the public as a whole&lt;br /&gt;
* provided for users collectively, where the use by one does not preclude the use of the good by others &lt;br /&gt;
* managed completely by the public, who has overall control&lt;br /&gt;
* an entity where the public&#039;s best interest is paramount over private concerns&lt;br /&gt;
* ie. roads, parks, military, utilities, etc.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
The Internet as a Public Good&lt;br /&gt;
* Universal access to the Internet will be essential &lt;br /&gt;
* The Internet as a whole is too large to effectively manage&lt;br /&gt;
* Certain aspects of the Internet should not be publicly controlled (ie. business)&lt;br /&gt;
&lt;br /&gt;
Problem definition:&lt;br /&gt;
&lt;br /&gt;
* What qualities do these potential public goods have in common?&lt;br /&gt;
&lt;br /&gt;
=Candidates for Public Goods=&lt;br /&gt;
In the following sections, a few key examples of aspects of the internet that would be excellent candidates for becoming public goods will be presented.&lt;br /&gt;
&lt;br /&gt;
==Physical Infrastructure==&lt;br /&gt;
As the ubiquitous nature of the Internet has unfolded, people&#039;s dependence on it has increased.  While the Internet&#039;s roots lie in a serendipitous alignment of academic and military interests, the Internet quickly became a provider of entertainment and communication.  Today the internet has enmeshed itself in the fabric of society and is part of many people&#039;s daily ritual.  For many, the internet is as important as roads for conducting their daily activities, yet while roads are not privately owned, the infrastructure of the internet lies in the hands of private companies.&lt;br /&gt;
&lt;br /&gt;
The private companies that currently own the infrastructure of the internet are referred to as ISPs (internet service providers). These are the entities that any user must currently pay to gain access to the Internet. For the purposes of this paper, we will consider the servers, routers, switches, hubs, wires, fiber, and all other hardware that exists outside of consumers&#039; own networks to be the infrastructure of the Internet, and we will not differentiate between these technologies.&lt;br /&gt;
&lt;br /&gt;
===Problems===&lt;br /&gt;
A variety of problems arise from the ISPs owning the infrastructure of the Internet.  These companies make decisions based on their own profit margins and with little regard for the public good.  One problem currently experienced is packet shaping&amp;lt;ref name=&amp;quot;wikipediaTrafficShaping&amp;quot;&amp;gt; Wikipedia/Traffic Shaping. visited April 2011. last modified March 2011. [http://en.wikipedia.org/wiki/Traffic_shaping link]&amp;lt;/ref&amp;gt;.  Packet shaping is currently used by ISPs to control the speed of certain kinds of traffic, thus avoiding congestion. It does this by assigning priorities to packets using various criteria decided by the ISPs.  While this can benefit everyone, with the technology implemented by private companies we do not know which protocols are limited, by how much, or whether it is only done at peak times.  We do not know whether this technology is deployed simply to decrease bandwidth consumption so the company can avoid upgrading its infrastructure.  Another potential problem is ISPs giving preferential treatment to websites or web services that have paid the ISP.  This could be implemented by slowing or disallowing traffic to competitors.  While this has not been proposed by ISPs, it has been fought against by the movement known as Net Neutrality&amp;lt;ref name=&amp;quot;wikipediaNetNeutrality&amp;quot;&amp;gt; Web Wikipedia/NetNeutrality. visited April 2011. last modified April 2011. [http://en.wikipedia.org/wiki/Net_neutrality link]&amp;lt;/ref&amp;gt;.  More recently, we have become acutely aware that ISPs provide convenient choke points.  During an uprising in Egypt, the incumbent government shut down the population&#039;s access to the internet by simply forcing the ISPs to shut down.  This is not a conclusive list of the weaknesses private ownership of the infrastructure presents; there are a host of others, but these few are cause for concern.&lt;br /&gt;
&lt;br /&gt;
===Alternatives===&lt;br /&gt;
With the current importance of the Internet, an alternative to private ownership of the Internet&#039;s infrastructure needs to be found. Here we provide two.  The first is to have the government legislate the behaviour of the ISPs; currently this is our only mechanism.  This would transform the infrastructure into a virtual public good by legislating the behaviours of the ISPs to be in accordance with the best interest of the public.  The problem is that politicians have their own goals and can be unduly influenced by private industries through lobbyists and other means.  Additionally, the government is slow to act, which could allow disruptive and unfair behaviour by the ISPs to affect the population until the government passes a law preventing it.  These reasons make this option less than compelling.&lt;br /&gt;
The other option is for the public to actually own the infrastructure of the Internet.  We are not proposing that the government take the infrastructure from the ISPs, but that it create its own with the help of the people.  This new infrastructure would coexist with the current ISPs and operate in parallel.  Conceivably, this new infrastructure would not be as fast as the incumbents&#039; and people might desire higher speeds.  In a structure analogous to the way maintenance of our roadways is organized, this would be adopted at all levels of government (municipal, provincial and federal).  This stratification allows incremental deployment, as individual urban centres acting at the municipal level could start with localized infrastructure.  The provinces could eventually provide infrastructure to connect urban centres together, and the federal government would eventually link together the provinces and other countries.  Below we describe one possible implementation of such an infrastructure. In doing so we can see what the concrete benefits might be, in addition to reducing dependency on private companies for infrastructure that in all respects should be a public good.&lt;br /&gt;
&lt;br /&gt;
===Implementation Description===&lt;br /&gt;
The implementation that we chose to explore for the purposes of this paper is a wireless mesh.  The mesh structure would exist in conjunction with the current infrastructure of the ISPs and, as such, it can be envisioned as an omnipresent overlay that provides alternative transportation for internet traffic.  At the simplest level this mesh would be composed of a number of fairly static nodes with high availability, consisting of users&#039; home computers.  In addition there would be a large number of highly mobile nodes with variable availability, consisting of users&#039; laptops and internet-aware personal devices.  The nodes would use algorithms to elect members to act as super nodes responsible for routing information.  While maintaining the accuracy of routing information is a significant challenge, research has been done that provides efficient mechanisms for doing this&amp;lt;ref name=&amp;quot;wirelessDart&amp;quot;&amp;gt;Jakob Eriksson, Michalis Faloutsos, Srikanth V. Krishnamurthy. DART: Dynamic Address RouTing for Scalable Ad Hoc and Mesh Networks. IEEE/ACM TRANSACTIONS ON NETWORKING, January 2006. DOI=10.1109/TNET.2006.890092 [http://doi.acm.org/10.1145/1250000/1241842/p119-eriksson.pdf link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;wirelessFlooding&amp;quot;&amp;gt;Pengfei Di, Thomas Fuhrman. Scalable Landmark Flooding - A Scalable Routing Protocol for WSNs. CoNEXT Student Workshop’09, December 2009. DOI=10.1145/1658997.1658999 [http://doi.acm.org/10.1145/1660000/1658999 link]&amp;lt;/ref&amp;gt;. If higher speeds were desired, the urban centres the mesh is located in could provide a wired infrastructure with frequent wireless access points servicing &#039;neighborhoods&#039;&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;/&amp;gt;.  Where the mesh density gets too low, presumably between urban centres, faster backbones would be added to connect these urban centres.  Conceivably the mesh might also extend to more distant locales, but performance would be severely impacted with very few nodes available to provide routing.  Finally, at the highest level, different countries could connect their meshes together.  As mentioned previously, these levels of connection run in parallel with the levels of government that we have in Canada.  As wireless technology improves, the speed and coverage of the mesh will improve as well and, as the level of support increases, the publicly offered speed could increase too.  Potentially, privately owned ISPs might even disappear entirely.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
The following list is a summary of the major advantages of having Internet infrastructure as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increase in speed&#039;&#039;&#039;&lt;br /&gt;
As envisioned, the publicly owned infrastructure would offload basic services such as email, instant messaging, and other services tolerant of lower speeds from the conventional infrastructure. This would free up bandwidth on the privately owned ISPs&#039; networks, which in turn would speed up access for members of the population who desire higher speeds and the services dependent on them, such as video streaming.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Increased robustness&#039;&#039;&#039;&lt;br /&gt;
A mesh also provides significant increases in robustness.  A mesh presents no single point of connection, so it cannot be disabled as easily as current ISPs can be.  Even if a portion of the mesh were partitioned from the Internet, it would continue to function within its partition.  Considering the significant portion of the population that uses the Internet to communicate, this could be a significant benefit in a disaster scenario, where other forms of communication relying on centralized infrastructure would likely fail while the mesh would continue to work.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universally provide a basic level of service&#039;&#039;&#039;&lt;br /&gt;
The publicly owned infrastructure would provide a basic level of service for everyone.  This could negate the need for ISPs for some users whose primary use of the internet is surfing or visiting low-bandwidth websites.  It could also help make internet access available to fiscally disadvantaged members of the population.  Finally, a mesh topology has the potential to extend internet coverage to low-density rural areas, as it has been used for this purpose in developing nations&amp;lt;ref name=&amp;quot;wirelessRural&amp;quot;/&amp;gt;.  Canada, due to its low population density, has areas that parallel the rural areas where the technology has been used.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable&#039;&#039;&#039;&lt;br /&gt;
A mesh supports incremental roll-out.  It could start in a single neighborhood, using the neighbours&#039; wireless hardware to create a small network.  As the mesh increases in size, it can be self-organizing, with the composing nodes being elected to more prominent roles if they have sufficient speed.  The municipality could support this topology by adding wireless access points attached to a higher-speed wired infrastructure in the urban centre.  The density of connection points has been studied, and there is a relationship between this density and the speeds that are sustainable by the mesh, again allowing incremental deployment but in the dimension of speed&amp;lt;ref name=&amp;quot;wirelessUrban&amp;quot;&amp;gt;Vinay Sridhara, Jonghyun Kim, Stephan Bohacek. Performance of Urban Mesh Networks∗. MSWiM’05, October 2005. DOI=10.1145/1089444.1089492 [http://doi.acm.org/10.1145/1089444.1089492 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Internet Infrastructure as a Public Good===&lt;br /&gt;
&lt;br /&gt;
While we feel the benefits outweigh the drawbacks, a summary of the disadvantages of making the infrastructure of the Internet a public good are presented here.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Public costs&#039;&#039;&#039;&lt;br /&gt;
Advocating that various levels of government participate in the provision of some infrastructure would necessitate an increase in taxes. Since the support would be at all levels of government, the taxes would be distributed at all levels becoming almost imperceptible.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Software Changes&#039;&#039;&#039;&lt;br /&gt;
To fully take advantage of the two-level system of Internet access that the mesh overlay provides, some software would need to change.  An example is email, which is normally considered a low-bandwidth service: if a large attachment were present, it would make sense to take advantage of the faster network connection to download it.  Thus the software would have to be aware of the availability and capability of the two networks and switch between them in specific cases.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs&#039;&#039;&#039;&lt;br /&gt;
Implementing a mesh, where the population provides some of the nodes active in routing and otherwise maintaining the network, incurs some cost.  This could be in the form of used CPU cycles and additional power usage to increase node availability.  Alternatively, a dedicated piece of hardware in the form of a wireless router with additional computational power could be a mandatory purchase.&lt;br /&gt;
&lt;br /&gt;
==Web Caching==&lt;br /&gt;
In general, the idea behind web caching is the temporary storage of web objects that can be used later without having to retrieve the data from the original server again. When a new web request is made, the resulting data is stored in a cache after being delivered to the end user. If another user requests the same data, barring certain conditions, the cached data is returned to the user and the request is not passed on to the originating web server. Many aspects of many websites do not change very often (e.g. logos, static text, pictures, other multimedia) and hence are good candidates for caching &amp;lt;ref name=&amp;quot;visolve&amp;quot;&amp;gt;Optimized Bandwidth + Secured Access = Accelerated Data Delivery, Web Caching - A cost effective approach for organizations to address all types of bandwidth management challenges. A ViSolve White Paper. March 2009. [http://www.visolve.com/squid/whitepapers/ViSolve_Web_Caching.pdf link]&amp;lt;/ref&amp;gt;. Web caches can either exist on the end user&#039;s machine (in the browser, for instance) or somewhere between the user and the servers they wish to communicate with, on what is known as a proxy server &amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;&amp;gt; Web Caching Overview. visited March 2011. [http://www.web-caching.com/welcome.html link] &amp;lt;/ref&amp;gt;. Internet Service Providers have a key interest in web caching and in most cases implement their own caches &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;&amp;gt;Geoff Huston. Web Caching. The Internet Protocol Journal Volume 2, No. 3. 2000. [http://www.cisco.com/web/about/ac123/ac147/ac174/ac199/about_cisco_ipj_archive_article09186a00800c8903.html link]&amp;lt;/ref&amp;gt;. There are a variety of incentives for entities on the internet, including ISPs, to use web caches. In general, these advantages can be summarized as follows:&lt;br /&gt;
*&#039;&#039;&#039;Reduced Bandwidth Usage&#039;&#039;&#039;&lt;br /&gt;
One of the main incentives for ISPs to use web caching is the reduction of outgoing web traffic, which results in a reduction of overall bandwidth usage &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;&amp;gt;Jia Wang. A survey of web caching schemes for the Internet. SIGCOMM Comput. Commun. Rev. 29, 5 (October 1999), 36-46. DOI=10.1145/505696.505701 [http://doi.acm.org/10.1145/505696.505701 link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;&amp;gt; Web application/Caching. visited March 2011. last modified September 2010. [http://docforge.com/wiki/Web_application/Caching link]&amp;lt;/ref&amp;gt;. For a typical ISP, web based traffic can account for upwards of 70% of the total bandwidth used and, of this web based traffic, the level of similarity of requests can be as high as 50%&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;. It is also true that, for many ISPs, transmission costs dominate overall operating costs, so any reduction in requests that must be satisfied outside of the ISP is beneficial&amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Improved End User Experience&#039;&#039;&#039;&lt;br /&gt;
Another benefit of web caching is the apparent reduction in latency for the end user &amp;lt;ref name=&amp;quot;visolve&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;webcaching.com&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;. Instead of web requests traveling all the way to the desired web server, these requests are intercepted by a proxy server, which can return a cached version of the requested data. Because the total distance the data has to travel is cut down significantly (as web caches are intended to be relatively close to the end user), the time to deliver the content can also be cut down significantly. It has been found that small performance improvements made by an ISP through the use of caching can result in a significantly better end user experience&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
*&#039;&#039;&#039;Reduced Web Server Load&#039;&#039;&#039;&lt;br /&gt;
Web servers providing popular data also benefit from web caching. Popular websites translate into a high number of simultaneous connections and high bandwidth usage for the providing web server &amp;lt;ref name=&amp;quot;cisco&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;. A web cache placed in front of a given web server can reduce the number of connections that must be passed through to it by serving the data it has stored. This can translate into reduced hardware and support costs&amp;lt;ref name=&amp;quot;docforge&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Additional advantages include the added robustness that web caches bring to the internet, allowing users to access documents even if the supplying web server is down, as well as allowing organizations to analyze internet usage patterns &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
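The cache-on-miss flow described at the start of this section can be sketched as follows; the URL and the origin_fetch stand-in are illustrative assumptions, not a real HTTP client.&lt;br /&gt;

```python
# Minimal sketch of the cache-on-miss behaviour described above.
# "origin_fetch" stands in for a real HTTP request to the origin server.
cache = {}
origin_requests = []  # track how often the origin server is actually hit

def origin_fetch(url):
    origin_requests.append(url)
    return f"<content of {url}>"

def get(url):
    """Serve from the cache if possible; otherwise fetch and store a copy."""
    if url not in cache:
        cache[url] = origin_fetch(url)
    return cache[url]

get("http://example.org/logo.png")   # miss: goes to the origin server
get("http://example.org/logo.png")   # hit: served from the cache
print(len(origin_requests))          # only one origin request was made
```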
&lt;br /&gt;
===Web Caching Schemes===&lt;br /&gt;
Since web caching has been identified as a significant asset to the internet as a whole, it has received its fair share of research. Many different approaches to web caching have been proposed, many of which utilize distributed or hierarchical elements. These approaches will not be examined in depth here, as they are considered merely implementation details. A survey of web caching schemes &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt; identified the main architectures that a large scale web cache can have. &lt;br /&gt;
&lt;br /&gt;
One of these is a hierarchical architecture. In such an architecture, web caches are placed at different levels of a network, starting with the client&#039;s machine, followed by a local, then a regional, and finally a national level cache. In this type of system, web requests are first sent to the lowest level cache and passed along to higher levels until the request can be satisfied. Once it is satisfied, the data travels back down the hierarchy, leaving a copy at each of the lower levels. Hierarchical web caches make efficient use of bandwidth by allowing popular web sites to propagate towards the demand. &lt;br /&gt;
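The lookup path just described (a miss climbs the hierarchy, and the response leaves a copy at each level on the way back down) can be sketched as follows; the level names and the origin stand-in are invented for illustration.&lt;br /&gt;

```python
# Sketch of a hierarchical cache lookup: a miss climbs the hierarchy, and the
# response leaves a copy at every lower level on the way back down.
levels = [{"name": "local", "store": {}},
          {"name": "regional", "store": {}},
          {"name": "national", "store": {}}]

def hierarchical_get(url):
    for i, level in enumerate(levels):
        if url in level["store"]:
            data = level["store"][url]   # satisfied at this level
            break
    else:
        i = len(levels)
        data = f"<origin copy of {url}>"   # fetched from the origin server
    # leave a copy at every level below the one that answered
    for level in levels[:i]:
        level["store"][url] = data
    return data

hierarchical_get("http://example.org/")   # miss everywhere
print([lvl["name"] for lvl in levels
       if "http://example.org/" in lvl["store"]])   # every level holds a copy
```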
&lt;br /&gt;
Another potential architecture is distributed web caching. In such a structure, there is only one level of caches, which cooperate with each other to satisfy web requests. To do this, each cache retains meta-data about the contents of all of the other caches it cooperates with and uses it to fulfill web requests it receives from clients. This web caching scheme allows for better load balancing and introduces fault tolerance not available in strictly hierarchical structures. Examples of such systems &amp;lt;ref name=&amp;quot;distributed1&amp;quot;&amp;gt;Jong Ho Park and Kil To Chong. An Implementation of the Client-Based Distributed Web Caching System. Web Technologies Research and Development - APWeb 2005. Lecture Notes in Computer Science, 2005, Volume 3399/2005, 759-770, DOI: 10.1007/978-3-540-31849-1_73 [http://www.springerlink.com/content/mga3c714e9glr5el/ link]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;distributed2&amp;quot;&amp;gt;&lt;br /&gt;
Sandra G. Dykes, Clinton L. Jeffery, and Samir Das. Taxonomy and Design Analysis for Distributed Web Caching. In Proceedings of the Thirty-second Annual Hawaii International Conference on System Sciences-Volume 8 - Volume 8 (HICSS &#039;99) [http://portal.acm.org/citation.cfm?id=876307 link 1], [http://www.google.ca/url?sa=t&amp;amp;source=web&amp;amp;cd=3&amp;amp;ved=0CCgQFjAC&amp;amp;url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.8.7799%26rep%3Drep1%26type%3Dpdf&amp;amp;rct=j&amp;amp;q=%22taxonomy%20and%20design%20analysis%20for%20distributed%20web%20caching%22&amp;amp;ei=mRqWTdnIJsjYgAe8i7GuCA&amp;amp;usg=AFQjCNGa-pNxW62SpjpwQmheA3KrH0nZ2A&amp;amp;sig2=htSW5Po4rEGbrd4LGVacmg link 2]&amp;lt;/ref&amp;gt;  have been implemented and shown to be effective in realistic web traffic scenarios.&lt;br /&gt;
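The peer meta-data idea above can be sketched as a shared directory mapping each URL to the cache that holds it; the peer names and cached contents are invented assumptions.&lt;br /&gt;

```python
# Sketch of distributed cooperation: each cache consults a shared directory
# saying which peer holds which URL, and forwards misses to the right peer.
peers = {
    "cache-east": {"http://example.org/a": "<copy of a>"},
    "cache-west": {"http://example.org/b": "<copy of b>"},
}
# meta-data every cache shares: URL -> name of the peer that holds it
directory = {url: name for name, store in peers.items() for url in store}

def cooperative_get(asking_peer, url):
    """Serve locally if possible, otherwise fetch from the peer listed in
    the shared directory; None means the origin server must be contacted."""
    if url in peers[asking_peer]:
        return peers[asking_peer][url]
    holder = directory.get(url)
    if holder is not None:
        return peers[holder][url]
    return None

print(cooperative_get("cache-east", "http://example.org/b"))  # served by peer
```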
&lt;br /&gt;
Finally, a third option for large scale web caches is a hybrid architecture. In such a system, a hierarchy of caches exists, however there are a number of caches on each level that cooperate with each other in a distributed fashion. This type of system can benefit from the combination of the different advantages that the hierarchical and distributed architectures provide. The Internet Cache Protocol &amp;lt;ref name=&amp;quot;icp&amp;quot;&amp;gt;D. Wessels and K. Claffy. Internet Cache Protocol (ICP), version 2. RFC 2186. 1997.&amp;lt;/ref&amp;gt; can be used to implement such a system where a cache hierarchy exists with a number of individual caches cooperating at each level &amp;lt;ref name=&amp;quot;survey&amp;quot;/&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Web Caching as a Public Good===&lt;br /&gt;
Web caching is obviously of enormous importance to the efficient functioning of the internet, and therefore is vitally important to end users. Web caching ultimately succeeds by keeping relevant data close to the end users. These web caches are typically implemented by ISPs, who do so because it is in their financial interest, not because it is in the interest of their customers. Their customers&#039; satisfaction is obviously important, but it is not their top priority. Transitioning ISP-controlled web caches into a public good would allow for a balance between the financial and end user experience aspects of web caching. This could be achieved by the government taking over the proxy servers that host the web caches, or through strict regulations on exactly how web caching should be done. A benefit of this is that it allows for the standardization of web caching on all proxies. This doesn&#039;t mean that every web cache needs to be implemented in exactly the same way, but it could allow for generic interfaces through which web caches of all types could communicate with one another. This would then allow end users who are customers of one ISP to be serviced by web caches that were previously available only to customers of other ISPs.&lt;br /&gt;
&lt;br /&gt;
Not only would standardizing web caches at the ISP level allow for these previously private, uncooperative proxies to act more like distributed web caches, it would also allow for a natural hierarchy to be built. This hierarchy would be based on geography, where the ISP level caches would now work together to service a relatively small region, which would then be followed by a level of web caches that would service a larger geographical region, followed by provincial/state level web caches and finally a national level. These of course would all be standardized to allow for regional or provincial caches to serve web requests for users in different regions or provinces. Having formalized and standardized web hierarchies would allow for a reduction in wasted bandwidth and an improved end user experience. This would also remove redundant data stored in caches that previously would not or could not communicate with each other, increasing the overall storage capabilities. This increase in storage would allow for more web data to be stored in more places, which would translate into more robust web caches by becoming more fault tolerant.&lt;br /&gt;
&lt;br /&gt;
Once web caching becomes a public good, it would also be in end users&#039; best interest to participate, if they could. This would essentially mean turning the lowest level of web caching (currently done on a user&#039;s machine) into a distributed web cache. This would allow users to share their caches with each other and allow for the building of neighbourhood-specific, ultra-fast caches. It could be implemented by each user supplying a small amount of hard drive space as well as some computation cycles, similar to BOINC projects&amp;lt;ref name=&amp;quot;boinc&amp;quot;&amp;gt;David P. Anderson. Public Computing: Reconnecting People to Science. Conference on Shared Knowledge and the Web. Residencia de Estudiantes, Madrid, Spain, Nov. 17-19 2003. [http://boinc.berkeley.edu/boinc2.pdf link]&amp;lt;/ref&amp;gt;. The end users&#039; machines could simply be used as passive storage devices, where the local, publicly owned or ISP-controlled proxy server decides what data exists where and points users to other users to satisfy web requests. Alternatively, the users&#039; machines could be active participants in the caching, receiving their user&#039;s requests and deciding which other users to contact to retrieve the data. In such a situation, any privacy concerns could be mitigated by the local proxy server. It has been shown that local, peer-assisted data delivery solutions can remove a significant amount of network traffic currently handled at the ISP level while also providing a noticeable performance increase &amp;lt;ref name=&amp;quot;p2p&amp;quot;&amp;gt;Thomas Karagiannis, Pablo Rodriguez and Konstantina Papagiannaki. Should internet service providers fear peer-assisted content distribution? In Proceedings of the 5th ACM SIGCOMM conference on Internet Measurement (IMC &#039;05). USENIX Association, Berkeley, CA, USA, 6-6. [http://portal.acm.org/citation.cfm?id=1251086.1251092 link]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Another option to allow for lower level distributed caching would be to extend the capabilities of the currently used cable or DSL modems. These new modems would have a relatively small amount of storage and computing power. This would remove the burden from users&#039; computers and let a special purpose device take over. Since the majority of users do not reset their modems as often as they shut down their computers, this would allow for greater reliability than the previously described solution. As in the previous example, these devices could participate as either active or passive players in the overall web caching scheme, a detail that does not need to be decided upon beforehand and could vary from neighbourhood to neighbourhood, or even house to house, depending on the circumstances. Although this would entail an additional investment on the part of the user, with ever decreasing hardware costs a relatively powerful device could be built relatively inexpensively, especially at scale. Once such devices became commonplace, the storage and computational power of the local web cache would scale as more users joined, allowing the overall capacity to grow linearly with demand.&lt;br /&gt;
&lt;br /&gt;
===Extending Web Caching to Full Web Application Caching===&lt;br /&gt;
&lt;br /&gt;
If web caching were to become a public good and the underlying infrastructure described above were put in place, an opportunity to extend the classic definition of web caching would arise. This new infrastructure could allow for the caching of web code on top of the static data that web caches currently hold. This would, in essence, allow popular websites to &amp;quot;live&amp;quot; closer to the users that actually use them, while also using the locally cached data. In this type of system, web developers would design their applications to make use of the available resources and then maintain a minimal back end system to tie everything together. There would no longer be a need to maintain enormous data centers to store all of the users&#039; data and the code to run their applications. A small number of people with a very good idea could realistically come together to implement their application, and growing popularity would no longer necessarily mean dramatically increased hardware and support costs, as it does today. Essentially, this would allow anyone to write the next Facebook or Google without the enormous financial and physical resources required of modern day corporations.   &lt;br /&gt;
&lt;br /&gt;
Another added benefit of this new definition of web caching is that individual fragments of the internet that, for one reason or another, become disconnected from the internet as a whole could still communicate through the cached web applications and data stored in their web caches. A region undergoing a major natural catastrophe, such as an earthquake, or a section of the internet that is willfully disconnected from the rest, would still be able to communicate internally through popular social networking websites such as Facebook, and would still have access to all of the web data stored in all reachable caches. This added robustness would certainly reduce the amount of panic inherent in these kinds of situations.&lt;br /&gt;
&lt;br /&gt;
===Advantages of Web Caching as a Public Good===&lt;br /&gt;
The following list is a summary of the major advantages of having web caching as a public good.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of wasted bandwidth.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the standardized, hierarchical/distributed hybrid web caches proposed, the amount of wasted bandwidth in the form of unneeded web requests being sent out from the caches to the originating web servers would go down. Currently, web caches implemented by different ISPs do not work together, so an uncached web request from a user of one ISP that could be satisfied by a cache of another local ISP must instead be serviced all the way from the originating web server. With the proposed architecture, these caches could work together, essentially multiplying the available cache size. This would result in such requests being satisfied locally, reducing the number of long distance web requests significantly.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Further reduction of latency and improved end user experience.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As noted above, with the massive increase in distributed local caching, the chances that users&#039; web requests can be satisfied locally are significantly improved. This leads to a reduction in wait time for end users, improving their overall web experience. This would be especially noticeable if the lowest level of the caching hierarchy proposed above (the distributed, neighbourhood-level cache) were implemented. Web requests that could be satisfied within a user&#039;s immediate neighbourhood would be incredibly fast and would translate into an unparalleled web experience. For requests that must travel outside of the local caches, the standardized caching hierarchy would mean more requests being satisfied by the regional, provincial or national level caches rather than being sent all the way to the original web server. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Added Reliability/Robustness&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The fact that the proposed web caching strategy implements distributed caching at each level of the hierarchy injects an added level of reliability that isn&#039;t present in modern web caching. Since the storage space of the distributed caches at each level will likely be larger than the amount that can be efficiently used as a cache, data duplication becomes possible. This duplication allows for fault tolerance, and the caches could be implemented so as to redistribute the remaining data in the event that a single cache went down. These proposed caches would also drastically improve reliability, especially with full web application caching. In the event that a single region were disconnected from the internet, users would still be able to use the popular web applications and data that are cached until they were reconnected. Web application programmers could take these scenarios into account and sync the local data with their back-end servers once a connection is reestablished, resulting in unprecedented reliability for the internet&#039;s most popular web sites and applications. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Inherently guaranteed basic level of service.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As mentioned in the previous point, since full web applications could now be cached, any user would have full access to any web application or data currently &#039;living&#039; on any reachable cache. This means that if a region were disconnected, all users in that region could still use any application or data stored in any cache anywhere in that region. This basic level of service is non-existent with modern web caches.  &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Control in the hands of the users.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
As with any entity that is put into public hands, private interests in how web caches are controlled would become secondary to those of the public. This means that any innovation in web caching, along with new technologies to improve how caching is done, could be implemented whenever it is in the best interests of the public. Currently we must rely on such upgrades being a worthy investment for a given ISP, regardless of how much they would improve overall performance.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Incrementally deployable.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, the proposed web caching scheme, both software- and infrastructure-wise, is incrementally deployable. A scheme like the one proposed would most likely start off in a few selected cities, perhaps with a few neighbourhoods participating in local neighbourhood caches. Once these became popular, more could start up in other urban areas, which could then be joined together by regional and provincial level caches. An important aspect of the proposed caches is that users have an incentive to join (the previously mentioned benefits) and, as more users and regions joined, the overall system would only get better.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of Web Caching as a Public Good===&lt;br /&gt;
&lt;br /&gt;
Along with the numerous benefits that making web caching a public good would produce, there would also be some significant disadvantages, discussed below. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Infrastructure costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
First and foremost, putting the proposed web caching infrastructure into place, even if it were to incorporate the ISP caches already deployed, would certainly involve significant infrastructure costs. Although a considerable amount of infrastructure may be available in large urban centers, rural regions, as well as caches at the higher levels of the hierarchy (provincial, national, etc.), will likely need a sizable investment to produce the envisioned system.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Support costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
On top of the infrastructure costs would be the support costs. The proposed infrastructure would take a massive amount of work, whether converting the old ISP caches or setting up new ones. This setup would include the initial installation of the software as well as rigorous testing, requiring a significant number of man-hours. Once the systems were set up, they would have to be closely monitored and tuned as conditions changed in the regions each cache served. On top of this, the caches would require routine maintenance and service from specially trained individuals. Overall, the support costs alone would be quite substantial. &lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Personal costs.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Lastly, individual users would also incur a cost. Firstly, they would most likely have to pay for both the infrastructure and support costs in the form of a tax, or possibly through usage fees. Secondly, if the low level neighbourhood caches were implemented, individual users would either have to provide CPU cycles and storage space (which is itself a cost) or have to purchase specialized hardware (e.g. a new modem, as proposed) to be able to participate in the local cache.&lt;br /&gt;
&lt;br /&gt;
==DNS==&lt;br /&gt;
&lt;br /&gt;
With the Internet’s vast ubiquity, as mentioned in the previous sections, a convenient method is required to refer to the different resources within this distributed system.  DNS (Domain Name System) aids this process by allowing resources to be referred to by name rather than a series of numbers.&lt;br /&gt;
&lt;br /&gt;
DNS can be considered the &amp;quot;switchboard&amp;quot; of the internet. To make the system as a whole work in a user friendly manner, a user or application need only supply a name, and the service returns the corresponding IP address. This service is essential to the functionality and usability of the internet.&amp;lt;ref name=&amp;quot;DNS4&amp;quot;&amp;gt;Domain Name System - Accessed March 10, 2011 [http://en.wikipedia.org/wiki/Domain_Name_System]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the sake of this paper, many technical details are avoided and a more simplistic, higher level view of the system is taken.  For the purposes of this discussion, DNS is considered a static, distributed tree that returns an IP address when queried with a domain name.&lt;br /&gt;
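That static-tree view can be sketched directly: each label of a domain name, read right to left, selects a child in the tree, and a leaf holds the IP address. The zone data below is a hypothetical example, not real DNS records.&lt;br /&gt;

```python
# Sketch of the "static, distributed tree" view of DNS taken above.
# The nested dict stands in for zones delegated across many servers.
root = {"ca": {"carleton": {"scs": "134.117.0.1"}}}   # hypothetical records

def resolve(name, tree=root):
    """Walk the tree from the rightmost label (the TLD) to the leftmost."""
    node = tree
    for label in reversed(name.split(".")):
        if not isinstance(node, dict) or label not in node:
            return None   # name does not exist in this tree
        node = node[label]
    return node if isinstance(node, str) else None

print(resolve("scs.carleton.ca"))   # the hypothetical address above
```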
Given its necessity, the system is a good candidate to be considered a public good. Providing the service currently falls under the responsibility of an individual user&#039;s Internet Service Provider (ISP). A user&#039;s ISP maintains the database, or tree, of names to IP addresses for its users to access.&lt;br /&gt;
&lt;br /&gt;
===Implementation Overview===&lt;br /&gt;
&lt;br /&gt;
From a user’s perspective, there are two categories of options when it comes to using DNS:&lt;br /&gt;
*Default Option (ISP)&lt;br /&gt;
*Alternatives (Public)&lt;br /&gt;
&lt;br /&gt;
====ISP====&lt;br /&gt;
&lt;br /&gt;
For a standard user, an ISP takes care of the DNS service.  It is understood by the user that all internet requests can be filtered or redirected as the ISP sees fit.  For example, two of Canada&#039;s biggest providers, Bell Canada and Rogers Communications, offer advertising-based redirects when a user seeks a non-existent URL.  This can be seen as helpful (in the event of typos) or a nuisance (suggestions based on advertising).&amp;lt;ref name=&amp;quot;DNS1&amp;quot;&amp;gt;Rogers Implements New Approach On Failed DNS Lookups, July 18, 2008 - Accessed March 15, 2011[http://www.michaelgeist.ca/content/view/3199/1/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS2&amp;quot;&amp;gt;Rogers latest ISP to &amp;quot;help&amp;quot; customers with DNS redirects, July 2008 - Accessed March 15, 2011[http://arstechnica.com/old/content/2008/07/rogers-latest-isp-to-help-customers-with-dns-redirects.ars]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS3&amp;quot;&amp;gt;Bell Starts Hijacking NX Domain Queries, August 2009 - Accessed March 12, 2011[http://tech.slashdot.org/story/09/08/04/1512248/Bell-Starts-Hijacking-NX-Domain-Queries]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Alternative/Public====&lt;br /&gt;
&lt;br /&gt;
More knowledgeable users can configure their setup so that DNS requests are processed by any number of alternatives, such as Google&#039;s public DNS project or OpenDNS.  This can be a healthy approach to avoid the ISP issues, but it still places significant trust in another corporation or in &amp;quot;good samaritans&amp;quot; in a public community.&lt;br /&gt;
&lt;br /&gt;
Issues arise, though, when considering user privacy. &amp;lt;ref name=&amp;quot;DNS5&amp;quot;&amp;gt;Google Public DNS: Good for privacy? - Accessed March 2011[http://features.techworld.com/networking/3208133/google-public-dns-good-for-privacy/]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS6&amp;quot;&amp;gt;Google Public DNS: Wonderful Freebie or Big New Menace? - Accessed March 2011[http://www.pcworld.com/businesscenter/article/183650/google_public_dns_wonderful_freebie_or_big_new_menace.html]&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;DNS7&amp;quot;&amp;gt;Free Fast Public DNS Servers List - Accessed March 2011[http://theos.in/windows-xp/free-fast-public-dns-server-list/]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Implementation Issues====&lt;br /&gt;
While the system is functional for what the majority of users and applications require, some problems arise with the current implementation.  These issues concern bottlenecks, update propagation, attack resiliency and general performance.  Before any replacement system is considered, these issues need to be improved upon.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Bottlenecks&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The current system is susceptible to bottleneck effects due to a low number of servers being accessed by many users.  For example, Bell Canada customers are served by two servers for the entire country.  Just as web caching has been shown in a previous section to improve general browsing and decrease latency, the same concept can be applied to DNS lookups.&lt;br /&gt;
The small number of servers also affects attack resiliency, since these servers represent single points of failure.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Update Propagation&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Any change to a domain’s nameserver can take up to 48 hours to propagate across the internet.  DNS servers around the world update their records on a static schedule and, when caching is taken into account, this period is required to get the changes across.&lt;br /&gt;
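This delay can be illustrated with a toy caching resolver that keeps serving a record until its time-to-live (TTL) expires, regardless of upstream changes; the class, TTL value and addresses are illustrative assumptions, not real DNS behaviour.&lt;br /&gt;

```python
# Sketch of why changes propagate slowly: a resolver serves its cached
# record until the TTL runs out, even if the authoritative record changed.
import time

class CachingResolver:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.cache = {}          # name -> (address, expiry timestamp)

    def resolve(self, name, authoritative_lookup, now=None):
        now = time.time() if now is None else now
        if name in self.cache:
            address, expires = self.cache[name]
            if now < expires:
                return address   # stale answer persists until the TTL expires
        address = authoritative_lookup(name)
        self.cache[name] = (address, now + self.ttl)
        return address

resolver = CachingResolver(ttl_seconds=172800)   # a 48-hour TTL
resolver.resolve("example.org", lambda n: "192.0.2.1", now=0)
# the authoritative record changes, but the cached answer is still served:
print(resolver.resolve("example.org", lambda n: "192.0.2.99", now=3600))
```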
&lt;br /&gt;
*&#039;&#039;&#039;Attack Resiliency&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The single points of failure indicated by the bottleneck issue make the entire system highly susceptible to Denial of Service (DoS) attacks.  Malicious users can target the limited number of servers to severely cramp internet traffic at any time.  Measures are in place to prevent this kind of attack; however, like anything security based, they require constant monitoring and changes in approach as malicious users evolve their techniques.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Performance&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Processing power will always improve, but the combination of all the factors mentioned above leaves much room for improvement in performance and robustness.&lt;br /&gt;
&lt;br /&gt;
===DNS Evolution===&lt;br /&gt;
Research is being done on improving the indicated aspects of DNS.  One candidate for a next generation naming system, actively researched at Cornell University, is the Cooperative Domain Name System (CoDoNS).&amp;lt;ref name=&amp;quot;CoDoNS&amp;quot;&amp;gt;The Design and Implementation of a Next Generation Name Service for the Internet - Accessed March 2011[http://conferences.sigcomm.org/sigcomm/2004/papers/p292-ramasubramanian1111.pdf]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Through a structure based on caching and peer-to-peer distribution, the system boasts an improvement on all the factors indicated above.  It also has the benefit of being incrementally deployable, which is a very important point when it comes to upgrading any part of a complex, distributed system like the internet.&lt;br /&gt;
&lt;br /&gt;
Given the high-level nature of this report, technical specifications are avoided in favour of looking at the role of DNS as a whole as a public good.&lt;br /&gt;
&lt;br /&gt;
===DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
DNS, regardless of its implementation, needs to be a service that is both reliable and trusted.  The user base depends on some form of trusted source, whether it is a government initiative, a corporately controlled process, or a user-contributed service.  Overall, the necessity of the service makes it a prototypical candidate for a public good: it is required to access and use the internet as we do today.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Advantages of DNS as a Public Good===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Reliability/Trust Solved&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
With the public&#039;s best interest vested in maintaining this service, the reliability and trust issues are addressed.  Users must trust some entity to provide the service, so it is essential that this entity have the public&#039;s best intentions in mind.  Misinformation and misdirection will be averted by placing trust in whatever public authority applies.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Uptake&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When it comes to implementing any sort of next-generation service or upgrade, it will be done when deemed most beneficial for the public.  The incremental deployability of the CoDoNS service is one example of how such an upgrade could be rolled out gradually.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Universal basic service&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
When combined with caching and a physical network infrastructure, localized systems could function as pockets of the internet.&lt;br /&gt;
&lt;br /&gt;
===Disadvantages of DNS as a Public Good===&lt;br /&gt;
*&#039;&#039;&#039;Privacy&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A governing authority will have the capability to observe and even log user behaviour.  This is a major issue if the authority is not trustworthy, so any such arrangement should work in concert with some form of privacy commission.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Cost&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Acquiring or mandating, and then maintaining, the current system will impose a financial burden on the public, as does any good brought into public hands.&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;Next Generation Implementations&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
It is possible that private corporations or independent organizations would implement newer schemas sooner than a public authority would, given their lower overhead and fewer constraints in decision making.  Users may miss out on the newest services available while the authority evaluates any upgrade options.&lt;br /&gt;
&lt;br /&gt;
=General Public Goods and the Internet=&lt;br /&gt;
&lt;br /&gt;
After analyzing the proposed candidates for public goods with respect to the internet, we identified many qualities that these entities had in common. Building from these qualities, we believe that the following list can be used as a set of basic criteria that a given aspect of the Internet should meet before being nominated as a candidate for becoming a public good.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Essential Component of the Internet&#039;&#039;&#039;&lt;br /&gt;
For an aspect of the Internet to become a public good, it should be fundamental to the overall functionality of the Internet as a whole. Otherwise, the public could end up owning aspects that are not permanent and will quickly become obsolete, meaning aspects of the Internet would cycle through the public&#039;s hands very quickly, which would be very expensive. In general, novel aspects of the internet should be left in private hands, and only after these aspects have proven themselves to be vital should they be considered as potential public goods.&lt;br /&gt;
* &#039;&#039;&#039;Adds Robustness and Reliability&#039;&#039;&#039;&lt;br /&gt;
Since the Internet itself is a huge, distributed system, robustness and reliability are key. If transitioning an aspect of the internet into the public&#039;s hands can improve this, it will improve the overall effectiveness of the Internet.&lt;br /&gt;
* &#039;&#039;&#039;Ensure a Basic Level of Service&#039;&#039;&#039;&lt;br /&gt;
Since public goods are defined to be something that everyone should have access to and something that is deemed essential, ensuring a basic level of service for users of a given Internet public good is essential. If an aspect of the Internet cannot guarantee access to all of the users in its reach, then by definition it should not be considered a public good.&lt;br /&gt;
* &#039;&#039;&#039;Improve Performance&#039;&#039;&#039;&lt;br /&gt;
Performance is always a key metric when discussing any distributed system, and it is a key concern here as well. &lt;br /&gt;
* &#039;&#039;&#039;Makes the User Experience a Priority&#039;&#039;&#039;&lt;br /&gt;
With all things considered, the end user&#039;s experience is one of the most important factors when thinking about public goods on the Internet. Currently many of the aspects of the Internet that would make good candidates for public goods have a large impact on the end user&#039;s experience. The parties that control these resources have their own priorities and sometimes these are at odds with what would be best for the user. &lt;br /&gt;
* &#039;&#039;&#039;Incrementally Deployable&#039;&#039;&#039;&lt;br /&gt;
As in any distributed system, any changes or improvements must be incrementally deployable. Generally speaking, these public goods should be able to be tried in certain locations before they are widely introduced. This allows for these new systems to grow dynamically, starting in areas that need them the most and ending up in more remote regions.&lt;br /&gt;
 &lt;br /&gt;
==From Presentation==&lt;br /&gt;
Transitioning a current aspect of the Internet into a public good should:&lt;br /&gt;
* add robustness and reliability&lt;br /&gt;
* ensure a basic level of service&lt;br /&gt;
* generally improve performance&lt;br /&gt;
* make the user experience a priority over private interests &lt;br /&gt;
* be incrementally deployable&lt;br /&gt;
 &lt;br /&gt;
Generally, potential disadvantages include:&lt;br /&gt;
* added infrastructure and support costs&lt;br /&gt;
* added complexity to application coders&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we know, the internet is becoming a ubiquitous entity in modern society, and access to it is becoming more and more essential as time goes by. It is because of this necessity that people will have a much greater incentive to own and control how the Internet works, effectively transitioning the Internet into a public good. However, there are significant portions of the Internet that it would be undesirable or infeasible for the public to hold. Many modern businesses rely on the Internet today for a significant portion of their revenue and are responsible for much of the innovation and evolution within the Internet itself. For these reasons it makes more sense to bring only certain portions of the Internet under the public&#039;s control.&lt;br /&gt;
&lt;br /&gt;
We first focused on which aspects of the Internet to convert into public goods by examining three ideal candidates: physical infrastructure, web caching and DNS. From examining these entities, a list of common criteria that could be used to identify future public goods on the Internet was derived. Using this set of criteria, one should be able to identify future public-good candidates.&lt;br /&gt;
&lt;br /&gt;
Moving into the future, the Internet is going to play a larger and larger role in our day-to-day lives. For this reason alone it will be vital to ensure that the Internet itself will evolve with its changing demands and for the fundamental aspects of the Internet to be secured. The best and only true way of doing this will be to give the users the overall control.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==From Presentation==&lt;br /&gt;
* Since the Internet is becoming a ubiquitous entity, access to it is now essential&lt;br /&gt;
* Due to the nature of the Internet, it is impossible and undesirable for a total conversion to a public good&lt;br /&gt;
* We have identified three aspects of the Internet as ideal candidates to become public goods &lt;br /&gt;
* These candidates brought forth a list of criteria that a potential aspect of the Internet would need to fulfill to become a public good&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Miscellaneous=&lt;br /&gt;
==Members==&lt;br /&gt;
*Lester Mundt - lmundt at connect.carleton.ca&lt;br /&gt;
*Fahim Rahman - frahman at connect.carleton.ca&lt;br /&gt;
*Andrew Schoenrock - aschoenr at scs.carleton.ca&lt;br /&gt;
&lt;br /&gt;
==Presentation==&lt;br /&gt;
[https://docs.google.com/present/edit?id=0AYULfbx_Ww_hZDZ3YnNicF8yZjR2Yng2YzI&amp;amp;hl=en&amp;amp;authkey=CNqB9o0G As presented April 5, 2011]&lt;/div&gt;</summary>
		<author><name>Aschoenr</name></author>
	</entry>
</feed>